00:00:00.001 Started by upstream project "autotest-nightly" build number 4335 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3698 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.155 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.155 The recommended git tool is: git 00:00:00.156 using credential 00000000-0000-0000-0000-000000000002 00:00:00.158 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.203 Fetching changes from the remote Git repository 00:00:00.207 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.242 Using shallow fetch with depth 1 00:00:00.242 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.242 > git --version # timeout=10 00:00:00.272 > git --version # 'git version 2.39.2' 00:00:00.272 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.289 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.289 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.154 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.165 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.176 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.176 > git config core.sparsecheckout # timeout=10 00:00:08.186 > git read-tree -mu HEAD # timeout=10 00:00:08.200 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.223 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.223 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.306 [Pipeline] Start of Pipeline 00:00:08.320 [Pipeline] library 00:00:08.322 Loading library shm_lib@master 00:00:08.322 Library shm_lib@master is cached. Copying from home. 00:00:08.338 [Pipeline] node 00:00:08.350 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:08.352 [Pipeline] { 00:00:08.361 [Pipeline] catchError 00:00:08.363 [Pipeline] { 00:00:08.374 [Pipeline] wrap 00:00:08.381 [Pipeline] { 00:00:08.389 [Pipeline] stage 00:00:08.391 [Pipeline] { (Prologue) 00:00:08.407 [Pipeline] echo 00:00:08.408 Node: VM-host-SM9 00:00:08.412 [Pipeline] cleanWs 00:00:08.421 [WS-CLEANUP] Deleting project workspace... 00:00:08.421 [WS-CLEANUP] Deferred wipeout is used... 
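For reference, the shallow checkout recorded near the top of this log can be reproduced by hand with plain git. This is only an illustrative sketch: the repository URL, branch, and revision are taken from the log above, while the target directory name is an arbitrary placeholder, not something the job creates.
# Illustrative working directory; not part of the job.
mkdir -p jbp-checkout && cd jbp-checkout
git init .
git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
# Depth-1 fetch of master, mirroring the "git fetch --tags --force --progress --depth=1" above.
git fetch --tags --force --depth=1 origin refs/heads/master
# Detached checkout of the fetched revision, as the plugin does with "git checkout -f".
git checkout -f db4637e8b949f278f369ec13f70585206ccd9507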
00:00:08.428 [WS-CLEANUP] done 00:00:08.612 [Pipeline] setCustomBuildProperty 00:00:08.681 [Pipeline] httpRequest 00:00:09.011 [Pipeline] echo 00:00:09.012 Sorcerer 10.211.164.20 is alive 00:00:09.021 [Pipeline] retry 00:00:09.023 [Pipeline] { 00:00:09.037 [Pipeline] httpRequest 00:00:09.041 HttpMethod: GET 00:00:09.041 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.042 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.059 Response Code: HTTP/1.1 200 OK 00:00:09.060 Success: Status code 200 is in the accepted range: 200,404 00:00:09.061 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:23.971 [Pipeline] } 00:00:23.989 [Pipeline] // retry 00:00:23.996 [Pipeline] sh 00:00:24.278 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:24.299 [Pipeline] httpRequest 00:00:24.712 [Pipeline] echo 00:00:24.713 Sorcerer 10.211.164.20 is alive 00:00:24.722 [Pipeline] retry 00:00:24.724 [Pipeline] { 00:00:24.738 [Pipeline] httpRequest 00:00:24.742 HttpMethod: GET 00:00:24.742 URL: http://10.211.164.20/packages/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:00:24.743 Sending request to url: http://10.211.164.20/packages/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:00:24.759 Response Code: HTTP/1.1 200 OK 00:00:24.759 Success: Status code 200 is in the accepted range: 200,404 00:00:24.760 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:00:52.929 [Pipeline] } 00:00:52.950 [Pipeline] // retry 00:00:52.961 [Pipeline] sh 00:00:53.243 + tar --no-same-owner -xf spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:00:56.543 [Pipeline] sh 00:00:56.822 + git -C spdk log --oneline -n5 00:00:56.822 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing 00:00:56.822 77ee034c7 bdev/nvme: Add lock to unprotected operations around attach controller 00:00:56.822 48454bb28 bdev/nvme: Add lock to unprotected operations around detach controller 00:00:56.822 4b59d7893 bdev/nvme: Use nbdev always for local nvme_bdev pointer variables 00:00:56.822 e56f1618f lib/ftl: Add explicit support for write unit sizes of base device 00:00:56.841 [Pipeline] writeFile 00:00:56.856 [Pipeline] sh 00:00:57.136 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:57.148 [Pipeline] sh 00:00:57.429 + cat autorun-spdk.conf 00:00:57.429 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:57.429 SPDK_TEST_NVMF=1 00:00:57.429 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:57.429 SPDK_TEST_URING=1 00:00:57.429 SPDK_TEST_VFIOUSER=1 00:00:57.429 SPDK_TEST_USDT=1 00:00:57.429 SPDK_RUN_ASAN=1 00:00:57.429 SPDK_RUN_UBSAN=1 00:00:57.429 NET_TYPE=virt 00:00:57.429 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:57.436 RUN_NIGHTLY=1 00:00:57.438 [Pipeline] } 00:00:57.451 [Pipeline] // stage 00:00:57.466 [Pipeline] stage 00:00:57.468 [Pipeline] { (Run VM) 00:00:57.480 [Pipeline] sh 00:00:57.760 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:57.760 + echo 'Start stage prepare_nvme.sh' 00:00:57.761 Start stage prepare_nvme.sh 00:00:57.761 + [[ -n 2 ]] 00:00:57.761 + disk_prefix=ex2 00:00:57.761 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:57.761 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:57.761 + source 
/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:57.761 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:57.761 ++ SPDK_TEST_NVMF=1 00:00:57.761 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:57.761 ++ SPDK_TEST_URING=1 00:00:57.761 ++ SPDK_TEST_VFIOUSER=1 00:00:57.761 ++ SPDK_TEST_USDT=1 00:00:57.761 ++ SPDK_RUN_ASAN=1 00:00:57.761 ++ SPDK_RUN_UBSAN=1 00:00:57.761 ++ NET_TYPE=virt 00:00:57.761 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:57.761 ++ RUN_NIGHTLY=1 00:00:57.761 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:57.761 + nvme_files=() 00:00:57.761 + declare -A nvme_files 00:00:57.761 + backend_dir=/var/lib/libvirt/images/backends 00:00:57.761 + nvme_files['nvme.img']=5G 00:00:57.761 + nvme_files['nvme-cmb.img']=5G 00:00:57.761 + nvme_files['nvme-multi0.img']=4G 00:00:57.761 + nvme_files['nvme-multi1.img']=4G 00:00:57.761 + nvme_files['nvme-multi2.img']=4G 00:00:57.761 + nvme_files['nvme-openstack.img']=8G 00:00:57.761 + nvme_files['nvme-zns.img']=5G 00:00:57.761 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:57.761 + (( SPDK_TEST_FTL == 1 )) 00:00:57.761 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:57.761 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:57.761 + for nvme in "${!nvme_files[@]}" 00:00:57.761 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:00:57.761 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:57.761 + for nvme in "${!nvme_files[@]}" 00:00:57.761 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:00:57.761 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:57.761 + for nvme in "${!nvme_files[@]}" 00:00:57.761 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:00:57.761 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:57.761 + for nvme in "${!nvme_files[@]}" 00:00:57.761 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:00:58.020 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:58.020 + for nvme in "${!nvme_files[@]}" 00:00:58.020 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:00:58.020 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:58.020 + for nvme in "${!nvme_files[@]}" 00:00:58.020 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:00:58.020 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:58.020 + for nvme in "${!nvme_files[@]}" 00:00:58.020 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:00:58.279 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:58.279 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:00:58.279 + echo 'End stage prepare_nvme.sh' 00:00:58.279 End stage prepare_nvme.sh 00:00:58.294 [Pipeline] sh 00:00:58.590 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:58.590 Setup: -n 10 
-s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39 00:00:58.590 00:00:58.590 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:58.590 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:58.590 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:58.590 HELP=0 00:00:58.590 DRY_RUN=0 00:00:58.590 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:00:58.590 NVME_DISKS_TYPE=nvme,nvme, 00:00:58.590 NVME_AUTO_CREATE=0 00:00:58.590 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:00:58.590 NVME_CMB=,, 00:00:58.590 NVME_PMR=,, 00:00:58.590 NVME_ZNS=,, 00:00:58.590 NVME_MS=,, 00:00:58.590 NVME_FDP=,, 00:00:58.590 SPDK_VAGRANT_DISTRO=fedora39 00:00:58.590 SPDK_VAGRANT_VMCPU=10 00:00:58.590 SPDK_VAGRANT_VMRAM=12288 00:00:58.590 SPDK_VAGRANT_PROVIDER=libvirt 00:00:58.590 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:58.590 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:58.590 SPDK_OPENSTACK_NETWORK=0 00:00:58.590 VAGRANT_PACKAGE_BOX=0 00:00:58.590 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:58.590 FORCE_DISTRO=true 00:00:58.590 VAGRANT_BOX_VERSION= 00:00:58.590 EXTRA_VAGRANTFILES= 00:00:58.590 NIC_MODEL=e1000 00:00:58.590 00:00:58.590 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:00:58.590 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:01.878 Bringing machine 'default' up with 'libvirt' provider... 00:01:02.137 ==> default: Creating image (snapshot of base box volume). 00:01:02.396 ==> default: Creating domain with the following settings... 
00:01:02.396 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733366732_fb49263b14c7f3aef725 00:01:02.396 ==> default: -- Domain type: kvm 00:01:02.396 ==> default: -- Cpus: 10 00:01:02.396 ==> default: -- Feature: acpi 00:01:02.396 ==> default: -- Feature: apic 00:01:02.396 ==> default: -- Feature: pae 00:01:02.396 ==> default: -- Memory: 12288M 00:01:02.396 ==> default: -- Memory Backing: hugepages: 00:01:02.396 ==> default: -- Management MAC: 00:01:02.396 ==> default: -- Loader: 00:01:02.396 ==> default: -- Nvram: 00:01:02.396 ==> default: -- Base box: spdk/fedora39 00:01:02.396 ==> default: -- Storage pool: default 00:01:02.396 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733366732_fb49263b14c7f3aef725.img (20G) 00:01:02.396 ==> default: -- Volume Cache: default 00:01:02.396 ==> default: -- Kernel: 00:01:02.396 ==> default: -- Initrd: 00:01:02.396 ==> default: -- Graphics Type: vnc 00:01:02.396 ==> default: -- Graphics Port: -1 00:01:02.396 ==> default: -- Graphics IP: 127.0.0.1 00:01:02.396 ==> default: -- Graphics Password: Not defined 00:01:02.396 ==> default: -- Video Type: cirrus 00:01:02.396 ==> default: -- Video VRAM: 9216 00:01:02.396 ==> default: -- Sound Type: 00:01:02.396 ==> default: -- Keymap: en-us 00:01:02.396 ==> default: -- TPM Path: 00:01:02.396 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:02.396 ==> default: -- Command line args: 00:01:02.396 ==> default: -> value=-device, 00:01:02.396 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:02.396 ==> default: -> value=-drive, 00:01:02.396 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:01:02.396 ==> default: -> value=-device, 00:01:02.396 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:02.396 ==> default: -> value=-device, 00:01:02.396 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:02.396 ==> default: -> value=-drive, 00:01:02.396 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:02.396 ==> default: -> value=-device, 00:01:02.396 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:02.396 ==> default: -> value=-drive, 00:01:02.396 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:02.396 ==> default: -> value=-device, 00:01:02.396 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:02.396 ==> default: -> value=-drive, 00:01:02.396 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:02.396 ==> default: -> value=-device, 00:01:02.396 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:02.396 ==> default: Creating shared folders metadata... 00:01:02.396 ==> default: Starting domain. 00:01:03.777 ==> default: Waiting for domain to get an IP address... 00:01:21.862 ==> default: Waiting for SSH to become available... 00:01:21.862 ==> default: Configuring and enabling network interfaces... 
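The "-device nvme" / "-device nvme-ns" argument pairs listed above are the standard QEMU way of exposing one NVMe controller with several namespaces, each backed by its own raw image. A minimal standalone sketch of the same wiring follows; the image names, machine type, and memory size here are illustrative and not taken from the job.
# Create two small raw backing files (sizes are arbitrary for this example).
qemu-img create -f raw ns1.img 1G
qemu-img create -f raw ns2.img 1G
# One controller (serial 12341) carrying two namespaces with 4 KiB logical/physical blocks.
qemu-system-x86_64 \
  -machine q35,accel=kvm -m 2048 -nographic \
  -drive format=raw,file=ns1.img,if=none,id=nvme-1-drive0 \
  -drive format=raw,file=ns2.img,if=none,id=nvme-1-drive1 \
  -device nvme,id=nvme-1,serial=12341 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,logical_block_size=4096,physical_block_size=4096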
00:01:24.402 default: SSH address: 192.168.121.127:22 00:01:24.402 default: SSH username: vagrant 00:01:24.402 default: SSH auth method: private key 00:01:26.375 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:34.501 ==> default: Mounting SSHFS shared folder... 00:01:35.875 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:35.875 ==> default: Checking Mount.. 00:01:37.252 ==> default: Folder Successfully Mounted! 00:01:37.252 ==> default: Running provisioner: file... 00:01:37.819 default: ~/.gitconfig => .gitconfig 00:01:38.388 00:01:38.388 SUCCESS! 00:01:38.388 00:01:38.388 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:38.388 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:38.388 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:38.388 00:01:38.398 [Pipeline] } 00:01:38.417 [Pipeline] // stage 00:01:38.427 [Pipeline] dir 00:01:38.428 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:38.430 [Pipeline] { 00:01:38.445 [Pipeline] catchError 00:01:38.447 [Pipeline] { 00:01:38.462 [Pipeline] sh 00:01:38.743 + vagrant ssh-config --host vagrant 00:01:38.743 + sed -ne /^Host/,$p 00:01:38.743 + tee ssh_conf 00:01:42.034 Host vagrant 00:01:42.034 HostName 192.168.121.127 00:01:42.034 User vagrant 00:01:42.034 Port 22 00:01:42.034 UserKnownHostsFile /dev/null 00:01:42.034 StrictHostKeyChecking no 00:01:42.034 PasswordAuthentication no 00:01:42.034 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:42.034 IdentitiesOnly yes 00:01:42.034 LogLevel FATAL 00:01:42.034 ForwardAgent yes 00:01:42.034 ForwardX11 yes 00:01:42.034 00:01:42.049 [Pipeline] withEnv 00:01:42.051 [Pipeline] { 00:01:42.066 [Pipeline] sh 00:01:42.346 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:42.346 source /etc/os-release 00:01:42.346 [[ -e /image.version ]] && img=$(< /image.version) 00:01:42.346 # Minimal, systemd-like check. 00:01:42.346 if [[ -e /.dockerenv ]]; then 00:01:42.346 # Clear garbage from the node's name: 00:01:42.346 # agt-er_autotest_547-896 -> autotest_547-896 00:01:42.346 # $HOSTNAME is the actual container id 00:01:42.346 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:42.347 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:42.347 # We can assume this is a mount from a host where container is running, 00:01:42.347 # so fetch its hostname to easily identify the target swarm worker. 
00:01:42.347 container="$(< /etc/hostname) ($agent)" 00:01:42.347 else 00:01:42.347 # Fallback 00:01:42.347 container=$agent 00:01:42.347 fi 00:01:42.347 fi 00:01:42.347 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:42.347 00:01:42.616 [Pipeline] } 00:01:42.632 [Pipeline] // withEnv 00:01:42.641 [Pipeline] setCustomBuildProperty 00:01:42.655 [Pipeline] stage 00:01:42.657 [Pipeline] { (Tests) 00:01:42.674 [Pipeline] sh 00:01:42.952 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:43.226 [Pipeline] sh 00:01:43.507 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:43.781 [Pipeline] timeout 00:01:43.781 Timeout set to expire in 1 hr 0 min 00:01:43.783 [Pipeline] { 00:01:43.799 [Pipeline] sh 00:01:44.081 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:44.648 HEAD is now at 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing 00:01:44.658 [Pipeline] sh 00:01:44.937 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:45.210 [Pipeline] sh 00:01:45.500 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:45.530 [Pipeline] sh 00:01:45.818 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:46.078 ++ readlink -f spdk_repo 00:01:46.078 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:46.078 + [[ -n /home/vagrant/spdk_repo ]] 00:01:46.078 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:46.078 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:46.078 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:46.078 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:46.078 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:46.078 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:46.078 + cd /home/vagrant/spdk_repo 00:01:46.078 + source /etc/os-release 00:01:46.078 ++ NAME='Fedora Linux' 00:01:46.078 ++ VERSION='39 (Cloud Edition)' 00:01:46.078 ++ ID=fedora 00:01:46.078 ++ VERSION_ID=39 00:01:46.078 ++ VERSION_CODENAME= 00:01:46.078 ++ PLATFORM_ID=platform:f39 00:01:46.078 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:46.078 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:46.078 ++ LOGO=fedora-logo-icon 00:01:46.078 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:46.078 ++ HOME_URL=https://fedoraproject.org/ 00:01:46.078 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:46.078 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:46.078 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:46.078 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:46.078 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:46.078 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:46.078 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:46.078 ++ SUPPORT_END=2024-11-12 00:01:46.078 ++ VARIANT='Cloud Edition' 00:01:46.078 ++ VARIANT_ID=cloud 00:01:46.078 + uname -a 00:01:46.078 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:46.078 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:46.336 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:46.336 Hugepages 00:01:46.336 node hugesize free / total 00:01:46.337 node0 1048576kB 0 / 0 00:01:46.337 node0 2048kB 0 / 0 00:01:46.337 00:01:46.337 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:46.337 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:46.596 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:46.597 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:46.597 + rm -f /tmp/spdk-ld-path 00:01:46.597 + source autorun-spdk.conf 00:01:46.597 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.597 ++ SPDK_TEST_NVMF=1 00:01:46.597 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:46.597 ++ SPDK_TEST_URING=1 00:01:46.597 ++ SPDK_TEST_VFIOUSER=1 00:01:46.597 ++ SPDK_TEST_USDT=1 00:01:46.597 ++ SPDK_RUN_ASAN=1 00:01:46.597 ++ SPDK_RUN_UBSAN=1 00:01:46.597 ++ NET_TYPE=virt 00:01:46.597 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:46.597 ++ RUN_NIGHTLY=1 00:01:46.597 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:46.597 + [[ -n '' ]] 00:01:46.597 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:46.597 + for M in /var/spdk/build-*-manifest.txt 00:01:46.597 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:46.597 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:46.597 + for M in /var/spdk/build-*-manifest.txt 00:01:46.597 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:46.597 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:46.597 + for M in /var/spdk/build-*-manifest.txt 00:01:46.597 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:46.597 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:46.597 ++ uname 00:01:46.597 + [[ Linux == \L\i\n\u\x ]] 00:01:46.597 + sudo dmesg -T 00:01:46.597 + sudo dmesg --clear 00:01:46.597 + dmesg_pid=5253 00:01:46.597 + [[ Fedora Linux == FreeBSD ]] 00:01:46.597 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:01:46.597 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:46.597 + sudo dmesg -Tw 00:01:46.597 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:46.597 + [[ -x /usr/src/fio-static/fio ]] 00:01:46.597 + export FIO_BIN=/usr/src/fio-static/fio 00:01:46.597 + FIO_BIN=/usr/src/fio-static/fio 00:01:46.597 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:46.597 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:46.597 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:46.597 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:46.597 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:46.597 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:46.597 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:46.597 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:46.597 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:46.597 02:46:17 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:46.597 02:46:17 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:46.597 02:46:17 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.597 02:46:17 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:46.597 02:46:17 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:46.597 02:46:17 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:01:46.597 02:46:17 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_VFIOUSER=1 00:01:46.597 02:46:17 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_TEST_USDT=1 00:01:46.597 02:46:17 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_RUN_ASAN=1 00:01:46.597 02:46:17 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_RUN_UBSAN=1 00:01:46.597 02:46:17 -- spdk_repo/autorun-spdk.conf@9 -- $ NET_TYPE=virt 00:01:46.597 02:46:17 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:46.597 02:46:17 -- spdk_repo/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:01:46.597 02:46:17 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:46.597 02:46:17 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:46.857 02:46:17 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:46.857 02:46:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:46.857 02:46:17 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:46.857 02:46:17 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:46.857 02:46:17 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:46.857 02:46:17 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:46.857 02:46:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.857 02:46:17 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.857 02:46:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.857 02:46:17 -- paths/export.sh@5 -- $ export PATH 00:01:46.857 02:46:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.857 02:46:17 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:46.857 02:46:17 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:46.857 02:46:17 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733366777.XXXXXX 00:01:46.857 02:46:17 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733366777.GJsk1f 00:01:46.857 02:46:17 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:46.857 02:46:17 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:46.857 02:46:17 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:46.857 02:46:17 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:46.857 02:46:17 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:46.857 02:46:17 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:46.857 02:46:17 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:46.857 02:46:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.857 02:46:17 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:01:46.857 02:46:17 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:46.857 02:46:17 -- pm/common@17 -- $ local monitor 00:01:46.857 02:46:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.857 02:46:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.857 02:46:17 -- pm/common@25 -- $ sleep 1 00:01:46.857 02:46:17 -- pm/common@21 -- $ date +%s 00:01:46.857 02:46:17 -- pm/common@21 -- $ date +%s 00:01:46.857 02:46:17 -- 
pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733366777 00:01:46.857 02:46:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733366777 00:01:46.857 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733366777_collect-cpu-load.pm.log 00:01:46.857 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733366777_collect-vmstat.pm.log 00:01:47.794 02:46:18 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:47.794 02:46:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:47.794 02:46:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:47.794 02:46:18 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:47.794 02:46:18 -- spdk/autobuild.sh@16 -- $ date -u 00:01:47.794 Thu Dec 5 02:46:18 AM UTC 2024 00:01:47.794 02:46:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:47.794 v25.01-pre-296-g8d3947977 00:01:47.794 02:46:18 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:47.794 02:46:18 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:47.794 02:46:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:47.794 02:46:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:47.794 02:46:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.794 ************************************ 00:01:47.794 START TEST asan 00:01:47.794 ************************************ 00:01:47.794 using asan 00:01:47.794 02:46:18 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:47.794 00:01:47.794 real 0m0.000s 00:01:47.794 user 0m0.000s 00:01:47.794 sys 0m0.000s 00:01:47.794 02:46:18 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:47.794 02:46:18 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:47.795 ************************************ 00:01:47.795 END TEST asan 00:01:47.795 ************************************ 00:01:47.795 02:46:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:47.795 02:46:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:47.795 02:46:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:47.795 02:46:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:47.795 02:46:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.795 ************************************ 00:01:47.795 START TEST ubsan 00:01:47.795 ************************************ 00:01:47.795 using ubsan 00:01:47.795 02:46:18 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:47.795 00:01:47.795 real 0m0.000s 00:01:47.795 user 0m0.000s 00:01:47.795 sys 0m0.000s 00:01:47.795 02:46:18 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:47.795 ************************************ 00:01:47.795 02:46:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:47.795 END TEST ubsan 00:01:47.795 ************************************ 00:01:47.795 02:46:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:47.795 02:46:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:47.795 02:46:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:47.795 02:46:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:47.795 02:46:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:47.795 02:46:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:47.795 02:46:18 -- spdk/autobuild.sh@59 -- $ 
[[ 0 -eq 1 ]] 00:01:47.795 02:46:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:47.795 02:46:18 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:01:48.052 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:48.052 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:48.618 Using 'verbs' RDMA provider 00:02:01.853 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:16.751 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:16.751 Creating mk/config.mk...done. 00:02:16.751 Creating mk/cc.flags.mk...done. 00:02:16.751 Type 'make' to build. 00:02:16.751 02:46:45 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:16.751 02:46:45 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:16.751 02:46:45 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:16.751 02:46:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.751 ************************************ 00:02:16.751 START TEST make 00:02:16.751 ************************************ 00:02:16.751 02:46:45 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:16.751 make[1]: Nothing to be done for 'all'. 00:02:16.751 The Meson build system 00:02:16.752 Version: 1.5.0 00:02:16.752 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:16.752 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:16.752 Build type: native build 00:02:16.752 Project name: libvfio-user 00:02:16.752 Project version: 0.0.1 00:02:16.752 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:16.752 C linker for the host machine: cc ld.bfd 2.40-14 00:02:16.752 Host machine cpu family: x86_64 00:02:16.752 Host machine cpu: x86_64 00:02:16.752 Run-time dependency threads found: YES 00:02:16.752 Library dl found: YES 00:02:16.752 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:16.752 Run-time dependency json-c found: YES 0.17 00:02:16.752 Run-time dependency cmocka found: YES 1.1.7 00:02:16.752 Program pytest-3 found: NO 00:02:16.752 Program flake8 found: NO 00:02:16.752 Program misspell-fixer found: NO 00:02:16.752 Program restructuredtext-lint found: NO 00:02:16.752 Program valgrind found: YES (/usr/bin/valgrind) 00:02:16.752 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:16.752 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:16.752 Compiler for C supports arguments -Wwrite-strings: YES 00:02:16.752 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:16.752 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:16.752 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:16.752 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
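The libvfio-user configuration above (summarised under "User defined options" just below) is an ordinary meson debug build of a shared library. Outside of the SPDK build wrappers it corresponds roughly to the following; the source and build directory paths are placeholders.
# Configure a debug, shared-library build into build-debug/ (paths are illustrative).
meson setup build-debug /path/to/libvfio-user \
  -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
# Then compile with the autodetected ninja backend.
ninja -C build-debug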
00:02:16.752 Build targets in project: 8 00:02:16.752 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:16.752 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:16.752 00:02:16.752 libvfio-user 0.0.1 00:02:16.752 00:02:16.752 User defined options 00:02:16.752 buildtype : debug 00:02:16.752 default_library: shared 00:02:16.752 libdir : /usr/local/lib 00:02:16.752 00:02:16.752 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:17.320 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:17.320 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:17.320 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:17.320 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:17.320 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:17.320 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:17.320 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:17.320 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:17.320 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:17.320 [9/37] Compiling C object samples/null.p/null.c.o 00:02:17.579 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:17.579 [11/37] Compiling C object samples/server.p/server.c.o 00:02:17.579 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:17.579 [13/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:17.579 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:17.579 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:17.579 [16/37] Compiling C object samples/client.p/client.c.o 00:02:17.579 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:17.579 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:17.579 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:17.579 [20/37] Linking target samples/client 00:02:17.579 [21/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:17.579 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:17.579 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:17.579 [24/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:17.579 [25/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:17.579 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:17.579 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:17.579 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:02:17.836 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:17.836 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:17.836 [31/37] Linking target test/unit_tests 00:02:17.836 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:17.836 [33/37] Linking target samples/null 00:02:17.836 [34/37] Linking target samples/gpio-pci-idio-16 00:02:17.836 [35/37] Linking target samples/shadow_ioeventfd_server 00:02:17.836 [36/37] Linking target samples/server 00:02:17.836 [37/37] Linking target samples/lspci 00:02:17.836 INFO: autodetecting backend as ninja 00:02:17.836 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:17.836 
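The DESTDIR=... meson install command on the next line stages the freshly built libvfio-user files into SPDK's build tree rather than into the real /usr/local. The same staging pattern works for any meson project; a generic sketch, with "build-debug" standing in for an already-configured build directory:
# Stage an install under ./staging instead of writing to the configured prefix.
DESTDIR=$PWD/staging meson install -C build-debug
# Installed files land under ./staging/<prefix>/..., e.g. ./staging/usr/local/lib/.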
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:18.402 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:18.402 ninja: no work to do. 00:02:28.370 The Meson build system 00:02:28.370 Version: 1.5.0 00:02:28.370 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:28.370 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:28.370 Build type: native build 00:02:28.370 Program cat found: YES (/usr/bin/cat) 00:02:28.370 Project name: DPDK 00:02:28.370 Project version: 24.03.0 00:02:28.370 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:28.370 C linker for the host machine: cc ld.bfd 2.40-14 00:02:28.370 Host machine cpu family: x86_64 00:02:28.370 Host machine cpu: x86_64 00:02:28.370 Message: ## Building in Developer Mode ## 00:02:28.370 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:28.370 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:28.370 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:28.370 Program python3 found: YES (/usr/bin/python3) 00:02:28.370 Program cat found: YES (/usr/bin/cat) 00:02:28.370 Compiler for C supports arguments -march=native: YES 00:02:28.370 Checking for size of "void *" : 8 00:02:28.370 Checking for size of "void *" : 8 (cached) 00:02:28.370 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:28.370 Library m found: YES 00:02:28.370 Library numa found: YES 00:02:28.370 Has header "numaif.h" : YES 00:02:28.370 Library fdt found: NO 00:02:28.370 Library execinfo found: NO 00:02:28.370 Has header "execinfo.h" : YES 00:02:28.370 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:28.370 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:28.370 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:28.370 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:28.370 Run-time dependency openssl found: YES 3.1.1 00:02:28.370 Run-time dependency libpcap found: YES 1.10.4 00:02:28.370 Has header "pcap.h" with dependency libpcap: YES 00:02:28.370 Compiler for C supports arguments -Wcast-qual: YES 00:02:28.370 Compiler for C supports arguments -Wdeprecated: YES 00:02:28.370 Compiler for C supports arguments -Wformat: YES 00:02:28.370 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:28.370 Compiler for C supports arguments -Wformat-security: NO 00:02:28.370 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:28.370 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:28.370 Compiler for C supports arguments -Wnested-externs: YES 00:02:28.370 Compiler for C supports arguments -Wold-style-definition: YES 00:02:28.370 Compiler for C supports arguments -Wpointer-arith: YES 00:02:28.370 Compiler for C supports arguments -Wsign-compare: YES 00:02:28.370 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:28.370 Compiler for C supports arguments -Wundef: YES 00:02:28.370 Compiler for C supports arguments -Wwrite-strings: YES 00:02:28.370 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:28.370 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:28.370 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:28.370 Compiler for C supports arguments 
-Wno-zero-length-bounds: YES 00:02:28.370 Program objdump found: YES (/usr/bin/objdump) 00:02:28.370 Compiler for C supports arguments -mavx512f: YES 00:02:28.370 Checking if "AVX512 checking" compiles: YES 00:02:28.370 Fetching value of define "__SSE4_2__" : 1 00:02:28.370 Fetching value of define "__AES__" : 1 00:02:28.370 Fetching value of define "__AVX__" : 1 00:02:28.370 Fetching value of define "__AVX2__" : 1 00:02:28.370 Fetching value of define "__AVX512BW__" : (undefined) 00:02:28.370 Fetching value of define "__AVX512CD__" : (undefined) 00:02:28.370 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:28.370 Fetching value of define "__AVX512F__" : (undefined) 00:02:28.370 Fetching value of define "__AVX512VL__" : (undefined) 00:02:28.370 Fetching value of define "__PCLMUL__" : 1 00:02:28.370 Fetching value of define "__RDRND__" : 1 00:02:28.370 Fetching value of define "__RDSEED__" : 1 00:02:28.370 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:28.370 Fetching value of define "__znver1__" : (undefined) 00:02:28.370 Fetching value of define "__znver2__" : (undefined) 00:02:28.370 Fetching value of define "__znver3__" : (undefined) 00:02:28.370 Fetching value of define "__znver4__" : (undefined) 00:02:28.370 Library asan found: YES 00:02:28.370 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:28.370 Message: lib/log: Defining dependency "log" 00:02:28.370 Message: lib/kvargs: Defining dependency "kvargs" 00:02:28.370 Message: lib/telemetry: Defining dependency "telemetry" 00:02:28.370 Library rt found: YES 00:02:28.370 Checking for function "getentropy" : NO 00:02:28.370 Message: lib/eal: Defining dependency "eal" 00:02:28.370 Message: lib/ring: Defining dependency "ring" 00:02:28.370 Message: lib/rcu: Defining dependency "rcu" 00:02:28.370 Message: lib/mempool: Defining dependency "mempool" 00:02:28.370 Message: lib/mbuf: Defining dependency "mbuf" 00:02:28.370 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:28.370 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:28.370 Compiler for C supports arguments -mpclmul: YES 00:02:28.370 Compiler for C supports arguments -maes: YES 00:02:28.370 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:28.370 Compiler for C supports arguments -mavx512bw: YES 00:02:28.370 Compiler for C supports arguments -mavx512dq: YES 00:02:28.370 Compiler for C supports arguments -mavx512vl: YES 00:02:28.370 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:28.370 Compiler for C supports arguments -mavx2: YES 00:02:28.370 Compiler for C supports arguments -mavx: YES 00:02:28.370 Message: lib/net: Defining dependency "net" 00:02:28.370 Message: lib/meter: Defining dependency "meter" 00:02:28.370 Message: lib/ethdev: Defining dependency "ethdev" 00:02:28.370 Message: lib/pci: Defining dependency "pci" 00:02:28.370 Message: lib/cmdline: Defining dependency "cmdline" 00:02:28.370 Message: lib/hash: Defining dependency "hash" 00:02:28.370 Message: lib/timer: Defining dependency "timer" 00:02:28.370 Message: lib/compressdev: Defining dependency "compressdev" 00:02:28.370 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:28.370 Message: lib/dmadev: Defining dependency "dmadev" 00:02:28.370 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:28.370 Message: lib/power: Defining dependency "power" 00:02:28.370 Message: lib/reorder: Defining dependency "reorder" 00:02:28.370 Message: lib/security: Defining dependency "security" 00:02:28.370 Has header 
"linux/userfaultfd.h" : YES 00:02:28.370 Has header "linux/vduse.h" : YES 00:02:28.370 Message: lib/vhost: Defining dependency "vhost" 00:02:28.370 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:28.370 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:28.371 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:28.371 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:28.371 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:28.371 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:28.371 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:28.371 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:28.371 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:28.371 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:28.371 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:28.371 Configuring doxy-api-html.conf using configuration 00:02:28.371 Configuring doxy-api-man.conf using configuration 00:02:28.371 Program mandb found: YES (/usr/bin/mandb) 00:02:28.371 Program sphinx-build found: NO 00:02:28.371 Configuring rte_build_config.h using configuration 00:02:28.371 Message: 00:02:28.371 ================= 00:02:28.371 Applications Enabled 00:02:28.371 ================= 00:02:28.371 00:02:28.371 apps: 00:02:28.371 00:02:28.371 00:02:28.371 Message: 00:02:28.371 ================= 00:02:28.371 Libraries Enabled 00:02:28.371 ================= 00:02:28.371 00:02:28.371 libs: 00:02:28.371 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:28.371 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:28.371 cryptodev, dmadev, power, reorder, security, vhost, 00:02:28.371 00:02:28.371 Message: 00:02:28.371 =============== 00:02:28.371 Drivers Enabled 00:02:28.371 =============== 00:02:28.371 00:02:28.371 common: 00:02:28.371 00:02:28.371 bus: 00:02:28.371 pci, vdev, 00:02:28.371 mempool: 00:02:28.371 ring, 00:02:28.371 dma: 00:02:28.371 00:02:28.371 net: 00:02:28.371 00:02:28.371 crypto: 00:02:28.371 00:02:28.371 compress: 00:02:28.371 00:02:28.371 vdpa: 00:02:28.371 00:02:28.371 00:02:28.371 Message: 00:02:28.371 ================= 00:02:28.371 Content Skipped 00:02:28.371 ================= 00:02:28.371 00:02:28.371 apps: 00:02:28.371 dumpcap: explicitly disabled via build config 00:02:28.371 graph: explicitly disabled via build config 00:02:28.371 pdump: explicitly disabled via build config 00:02:28.371 proc-info: explicitly disabled via build config 00:02:28.371 test-acl: explicitly disabled via build config 00:02:28.371 test-bbdev: explicitly disabled via build config 00:02:28.371 test-cmdline: explicitly disabled via build config 00:02:28.371 test-compress-perf: explicitly disabled via build config 00:02:28.371 test-crypto-perf: explicitly disabled via build config 00:02:28.371 test-dma-perf: explicitly disabled via build config 00:02:28.371 test-eventdev: explicitly disabled via build config 00:02:28.371 test-fib: explicitly disabled via build config 00:02:28.371 test-flow-perf: explicitly disabled via build config 00:02:28.371 test-gpudev: explicitly disabled via build config 00:02:28.371 test-mldev: explicitly disabled via build config 00:02:28.371 test-pipeline: explicitly disabled via build config 00:02:28.371 test-pmd: explicitly disabled via build config 00:02:28.371 test-regex: explicitly disabled via build config 00:02:28.371 
test-sad: explicitly disabled via build config 00:02:28.371 test-security-perf: explicitly disabled via build config 00:02:28.371 00:02:28.371 libs: 00:02:28.371 argparse: explicitly disabled via build config 00:02:28.371 metrics: explicitly disabled via build config 00:02:28.371 acl: explicitly disabled via build config 00:02:28.371 bbdev: explicitly disabled via build config 00:02:28.371 bitratestats: explicitly disabled via build config 00:02:28.371 bpf: explicitly disabled via build config 00:02:28.371 cfgfile: explicitly disabled via build config 00:02:28.371 distributor: explicitly disabled via build config 00:02:28.371 efd: explicitly disabled via build config 00:02:28.371 eventdev: explicitly disabled via build config 00:02:28.371 dispatcher: explicitly disabled via build config 00:02:28.371 gpudev: explicitly disabled via build config 00:02:28.371 gro: explicitly disabled via build config 00:02:28.371 gso: explicitly disabled via build config 00:02:28.371 ip_frag: explicitly disabled via build config 00:02:28.371 jobstats: explicitly disabled via build config 00:02:28.371 latencystats: explicitly disabled via build config 00:02:28.371 lpm: explicitly disabled via build config 00:02:28.371 member: explicitly disabled via build config 00:02:28.371 pcapng: explicitly disabled via build config 00:02:28.371 rawdev: explicitly disabled via build config 00:02:28.371 regexdev: explicitly disabled via build config 00:02:28.371 mldev: explicitly disabled via build config 00:02:28.371 rib: explicitly disabled via build config 00:02:28.371 sched: explicitly disabled via build config 00:02:28.371 stack: explicitly disabled via build config 00:02:28.371 ipsec: explicitly disabled via build config 00:02:28.371 pdcp: explicitly disabled via build config 00:02:28.371 fib: explicitly disabled via build config 00:02:28.371 port: explicitly disabled via build config 00:02:28.371 pdump: explicitly disabled via build config 00:02:28.371 table: explicitly disabled via build config 00:02:28.371 pipeline: explicitly disabled via build config 00:02:28.371 graph: explicitly disabled via build config 00:02:28.371 node: explicitly disabled via build config 00:02:28.371 00:02:28.371 drivers: 00:02:28.371 common/cpt: not in enabled drivers build config 00:02:28.371 common/dpaax: not in enabled drivers build config 00:02:28.371 common/iavf: not in enabled drivers build config 00:02:28.371 common/idpf: not in enabled drivers build config 00:02:28.371 common/ionic: not in enabled drivers build config 00:02:28.371 common/mvep: not in enabled drivers build config 00:02:28.371 common/octeontx: not in enabled drivers build config 00:02:28.371 bus/auxiliary: not in enabled drivers build config 00:02:28.371 bus/cdx: not in enabled drivers build config 00:02:28.371 bus/dpaa: not in enabled drivers build config 00:02:28.371 bus/fslmc: not in enabled drivers build config 00:02:28.371 bus/ifpga: not in enabled drivers build config 00:02:28.371 bus/platform: not in enabled drivers build config 00:02:28.371 bus/uacce: not in enabled drivers build config 00:02:28.371 bus/vmbus: not in enabled drivers build config 00:02:28.371 common/cnxk: not in enabled drivers build config 00:02:28.371 common/mlx5: not in enabled drivers build config 00:02:28.371 common/nfp: not in enabled drivers build config 00:02:28.371 common/nitrox: not in enabled drivers build config 00:02:28.371 common/qat: not in enabled drivers build config 00:02:28.371 common/sfc_efx: not in enabled drivers build config 00:02:28.371 mempool/bucket: not in enabled 
drivers build config 00:02:28.371 mempool/cnxk: not in enabled drivers build config 00:02:28.371 mempool/dpaa: not in enabled drivers build config 00:02:28.371 mempool/dpaa2: not in enabled drivers build config 00:02:28.371 mempool/octeontx: not in enabled drivers build config 00:02:28.371 mempool/stack: not in enabled drivers build config 00:02:28.371 dma/cnxk: not in enabled drivers build config 00:02:28.372 dma/dpaa: not in enabled drivers build config 00:02:28.372 dma/dpaa2: not in enabled drivers build config 00:02:28.372 dma/hisilicon: not in enabled drivers build config 00:02:28.372 dma/idxd: not in enabled drivers build config 00:02:28.372 dma/ioat: not in enabled drivers build config 00:02:28.372 dma/skeleton: not in enabled drivers build config 00:02:28.372 net/af_packet: not in enabled drivers build config 00:02:28.372 net/af_xdp: not in enabled drivers build config 00:02:28.372 net/ark: not in enabled drivers build config 00:02:28.372 net/atlantic: not in enabled drivers build config 00:02:28.372 net/avp: not in enabled drivers build config 00:02:28.372 net/axgbe: not in enabled drivers build config 00:02:28.372 net/bnx2x: not in enabled drivers build config 00:02:28.372 net/bnxt: not in enabled drivers build config 00:02:28.372 net/bonding: not in enabled drivers build config 00:02:28.372 net/cnxk: not in enabled drivers build config 00:02:28.372 net/cpfl: not in enabled drivers build config 00:02:28.372 net/cxgbe: not in enabled drivers build config 00:02:28.372 net/dpaa: not in enabled drivers build config 00:02:28.372 net/dpaa2: not in enabled drivers build config 00:02:28.372 net/e1000: not in enabled drivers build config 00:02:28.372 net/ena: not in enabled drivers build config 00:02:28.372 net/enetc: not in enabled drivers build config 00:02:28.372 net/enetfec: not in enabled drivers build config 00:02:28.372 net/enic: not in enabled drivers build config 00:02:28.372 net/failsafe: not in enabled drivers build config 00:02:28.372 net/fm10k: not in enabled drivers build config 00:02:28.372 net/gve: not in enabled drivers build config 00:02:28.372 net/hinic: not in enabled drivers build config 00:02:28.372 net/hns3: not in enabled drivers build config 00:02:28.372 net/i40e: not in enabled drivers build config 00:02:28.372 net/iavf: not in enabled drivers build config 00:02:28.372 net/ice: not in enabled drivers build config 00:02:28.372 net/idpf: not in enabled drivers build config 00:02:28.372 net/igc: not in enabled drivers build config 00:02:28.372 net/ionic: not in enabled drivers build config 00:02:28.372 net/ipn3ke: not in enabled drivers build config 00:02:28.372 net/ixgbe: not in enabled drivers build config 00:02:28.372 net/mana: not in enabled drivers build config 00:02:28.372 net/memif: not in enabled drivers build config 00:02:28.372 net/mlx4: not in enabled drivers build config 00:02:28.372 net/mlx5: not in enabled drivers build config 00:02:28.372 net/mvneta: not in enabled drivers build config 00:02:28.372 net/mvpp2: not in enabled drivers build config 00:02:28.372 net/netvsc: not in enabled drivers build config 00:02:28.372 net/nfb: not in enabled drivers build config 00:02:28.372 net/nfp: not in enabled drivers build config 00:02:28.372 net/ngbe: not in enabled drivers build config 00:02:28.372 net/null: not in enabled drivers build config 00:02:28.372 net/octeontx: not in enabled drivers build config 00:02:28.372 net/octeon_ep: not in enabled drivers build config 00:02:28.372 net/pcap: not in enabled drivers build config 00:02:28.372 net/pfe: not in 
enabled drivers build config 00:02:28.372 net/qede: not in enabled drivers build config 00:02:28.372 net/ring: not in enabled drivers build config 00:02:28.372 net/sfc: not in enabled drivers build config 00:02:28.372 net/softnic: not in enabled drivers build config 00:02:28.372 net/tap: not in enabled drivers build config 00:02:28.372 net/thunderx: not in enabled drivers build config 00:02:28.372 net/txgbe: not in enabled drivers build config 00:02:28.372 net/vdev_netvsc: not in enabled drivers build config 00:02:28.372 net/vhost: not in enabled drivers build config 00:02:28.372 net/virtio: not in enabled drivers build config 00:02:28.372 net/vmxnet3: not in enabled drivers build config 00:02:28.372 raw/*: missing internal dependency, "rawdev" 00:02:28.372 crypto/armv8: not in enabled drivers build config 00:02:28.372 crypto/bcmfs: not in enabled drivers build config 00:02:28.372 crypto/caam_jr: not in enabled drivers build config 00:02:28.372 crypto/ccp: not in enabled drivers build config 00:02:28.372 crypto/cnxk: not in enabled drivers build config 00:02:28.372 crypto/dpaa_sec: not in enabled drivers build config 00:02:28.372 crypto/dpaa2_sec: not in enabled drivers build config 00:02:28.372 crypto/ipsec_mb: not in enabled drivers build config 00:02:28.372 crypto/mlx5: not in enabled drivers build config 00:02:28.372 crypto/mvsam: not in enabled drivers build config 00:02:28.372 crypto/nitrox: not in enabled drivers build config 00:02:28.372 crypto/null: not in enabled drivers build config 00:02:28.372 crypto/octeontx: not in enabled drivers build config 00:02:28.372 crypto/openssl: not in enabled drivers build config 00:02:28.372 crypto/scheduler: not in enabled drivers build config 00:02:28.372 crypto/uadk: not in enabled drivers build config 00:02:28.372 crypto/virtio: not in enabled drivers build config 00:02:28.372 compress/isal: not in enabled drivers build config 00:02:28.372 compress/mlx5: not in enabled drivers build config 00:02:28.372 compress/nitrox: not in enabled drivers build config 00:02:28.372 compress/octeontx: not in enabled drivers build config 00:02:28.372 compress/zlib: not in enabled drivers build config 00:02:28.372 regex/*: missing internal dependency, "regexdev" 00:02:28.372 ml/*: missing internal dependency, "mldev" 00:02:28.372 vdpa/ifc: not in enabled drivers build config 00:02:28.372 vdpa/mlx5: not in enabled drivers build config 00:02:28.372 vdpa/nfp: not in enabled drivers build config 00:02:28.372 vdpa/sfc: not in enabled drivers build config 00:02:28.372 event/*: missing internal dependency, "eventdev" 00:02:28.372 baseband/*: missing internal dependency, "bbdev" 00:02:28.372 gpu/*: missing internal dependency, "gpudev" 00:02:28.372 00:02:28.372 00:02:28.372 Build targets in project: 85 00:02:28.372 00:02:28.372 DPDK 24.03.0 00:02:28.372 00:02:28.372 User defined options 00:02:28.372 buildtype : debug 00:02:28.372 default_library : shared 00:02:28.372 libdir : lib 00:02:28.372 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:28.372 b_sanitize : address 00:02:28.372 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:28.372 c_link_args : 00:02:28.372 cpu_instruction_set: native 00:02:28.372 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:28.372 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:28.372 enable_docs : false 00:02:28.372 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:28.372 enable_kmods : false 00:02:28.372 max_lcores : 128 00:02:28.372 tests : false 00:02:28.372 00:02:28.372 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:28.631 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:28.890 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:28.890 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:28.890 [3/268] Linking static target lib/librte_kvargs.a 00:02:28.890 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:28.890 [5/268] Linking static target lib/librte_log.a 00:02:28.890 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:29.456 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.456 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:29.456 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:29.715 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:29.715 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:29.715 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:29.715 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:29.715 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:29.973 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.973 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:29.973 [17/268] Linking static target lib/librte_telemetry.a 00:02:29.973 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:29.973 [19/268] Linking target lib/librte_log.so.24.1 00:02:29.973 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:30.231 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:30.231 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:30.489 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:30.489 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:30.489 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:30.489 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:30.746 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:30.746 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:30.746 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:30.746 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.746 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:30.746 [32/268] Linking target lib/librte_telemetry.so.24.1 00:02:30.746 [33/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:31.003 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:31.003 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:31.260 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:31.260 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:31.518 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:31.518 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:31.518 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:31.518 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:31.518 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:31.518 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:31.776 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:32.034 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:32.034 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:32.034 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:32.034 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:32.292 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:32.550 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:32.550 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:32.550 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:32.550 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:32.809 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:32.809 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:32.809 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:33.068 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:33.068 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:33.068 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:33.327 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:33.327 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:33.327 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:33.327 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:33.327 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:33.586 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:33.586 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:33.844 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:34.103 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:34.103 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:34.103 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:34.362 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:34.362 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 
00:02:34.362 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:34.362 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:34.362 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:34.362 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:34.362 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:34.362 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:34.362 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:34.620 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:34.877 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:34.877 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:34.877 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:34.877 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:35.134 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:35.134 [86/268] Linking static target lib/librte_eal.a 00:02:35.135 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:35.392 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:35.392 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:35.392 [90/268] Linking static target lib/librte_ring.a 00:02:35.392 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:35.392 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:35.392 [93/268] Linking static target lib/librte_rcu.a 00:02:35.392 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:35.650 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:35.650 [96/268] Linking static target lib/librte_mempool.a 00:02:35.650 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:35.650 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:35.908 [99/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.908 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:35.908 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.165 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:36.165 [103/268] Linking static target lib/librte_mbuf.a 00:02:36.165 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:36.165 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:36.423 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:36.680 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:36.680 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:36.938 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:36.938 [110/268] Linking static target lib/librte_meter.a 00:02:36.938 [111/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.938 [112/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:36.938 [113/268] Linking static target lib/librte_net.a 00:02:36.938 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:37.196 [115/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:37.196 [116/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.454 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.454 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.712 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:37.712 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:38.034 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:38.035 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:38.293 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:38.551 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:38.551 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:38.551 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:38.551 [127/268] Linking static target lib/librte_pci.a 00:02:38.809 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:38.809 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:38.809 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:38.809 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:38.809 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:38.809 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:39.067 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:39.067 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:39.067 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:39.067 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.067 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:39.067 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:39.067 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:39.325 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:39.325 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:39.325 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:39.325 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:39.325 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:39.584 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:39.842 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:39.842 [148/268] Linking static target lib/librte_cmdline.a 00:02:39.842 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:40.111 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:40.111 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:40.111 [152/268] Linking static target lib/librte_timer.a 00:02:40.111 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:40.111 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 
00:02:40.369 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:40.369 [156/268] Linking static target lib/librte_ethdev.a 00:02:40.369 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:40.627 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:40.628 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:40.628 [160/268] Linking static target lib/librte_hash.a 00:02:40.887 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.887 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:40.887 [163/268] Linking static target lib/librte_compressdev.a 00:02:40.887 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:40.887 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:41.145 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:41.403 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:41.403 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:41.403 [169/268] Linking static target lib/librte_dmadev.a 00:02:41.403 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:41.662 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.662 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:41.662 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:41.921 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.921 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.179 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:42.179 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:42.179 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:42.437 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:42.438 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:42.438 [181/268] Linking static target lib/librte_cryptodev.a 00:02:42.438 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:42.438 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:42.438 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.005 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:43.005 [186/268] Linking static target lib/librte_power.a 00:02:43.005 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:43.005 [188/268] Linking static target lib/librte_reorder.a 00:02:43.005 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:43.264 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:43.264 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:43.264 [192/268] Linking static target lib/librte_security.a 00:02:43.264 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:43.831 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:43.831 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:44.090 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.090 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.349 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:44.349 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:44.607 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:44.607 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:44.865 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.122 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:45.122 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:45.122 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:45.122 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:45.380 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:45.638 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:45.638 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:45.638 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:45.638 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:45.896 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:45.896 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:45.896 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.896 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.896 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:45.896 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:45.896 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:45.896 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:45.896 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:45.896 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:46.154 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:46.154 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:46.154 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:46.154 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:46.154 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.412 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.347 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.347 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:47.347 [230/268] Linking target lib/librte_eal.so.24.1 00:02:47.347 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 
00:02:47.347 [232/268] Linking target lib/librte_ring.so.24.1 00:02:47.347 [233/268] Linking target lib/librte_meter.so.24.1 00:02:47.347 [234/268] Linking target lib/librte_pci.so.24.1 00:02:47.347 [235/268] Linking target lib/librte_dmadev.so.24.1 00:02:47.347 [236/268] Linking target lib/librte_timer.so.24.1 00:02:47.347 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:47.605 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:47.605 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:47.605 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:47.605 [241/268] Linking target lib/librte_mempool.so.24.1 00:02:47.605 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:47.605 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:47.605 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:47.605 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:47.605 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:47.605 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:47.863 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:47.863 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:47.863 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:47.863 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:47.863 [252/268] Linking target lib/librte_net.so.24.1 00:02:47.863 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:47.863 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:48.120 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:48.120 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:48.120 [257/268] Linking target lib/librte_hash.so.24.1 00:02:48.120 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:48.120 [259/268] Linking target lib/librte_security.so.24.1 00:02:48.120 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:48.685 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.685 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:48.943 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:48.943 [264/268] Linking target lib/librte_power.so.24.1 00:02:51.487 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:51.487 [266/268] Linking static target lib/librte_vhost.a 00:02:52.893 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.151 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:53.151 INFO: autodetecting backend as ninja 00:02:53.151 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:15.078 CC lib/log/log_flags.o 00:03:15.078 CC lib/log/log.o 00:03:15.078 CC lib/log/log_deprecated.o 00:03:15.078 CC lib/ut_mock/mock.o 00:03:15.078 CC lib/ut/ut.o 00:03:15.078 LIB libspdk_ut_mock.a 00:03:15.078 LIB libspdk_log.a 00:03:15.078 LIB libspdk_ut.a 00:03:15.078 SO libspdk_ut.so.2.0 00:03:15.078 SO libspdk_ut_mock.so.6.0 00:03:15.078 SO libspdk_log.so.7.1 00:03:15.078 SYMLINK libspdk_ut_mock.so 
00:03:15.078 SYMLINK libspdk_ut.so 00:03:15.078 SYMLINK libspdk_log.so 00:03:15.078 CC lib/util/base64.o 00:03:15.078 CC lib/util/bit_array.o 00:03:15.078 CC lib/dma/dma.o 00:03:15.078 CC lib/util/crc16.o 00:03:15.078 CC lib/util/cpuset.o 00:03:15.078 CC lib/util/crc32.o 00:03:15.078 CC lib/util/crc32c.o 00:03:15.078 CXX lib/trace_parser/trace.o 00:03:15.078 CC lib/ioat/ioat.o 00:03:15.078 CC lib/vfio_user/host/vfio_user_pci.o 00:03:15.078 CC lib/util/crc32_ieee.o 00:03:15.078 CC lib/util/crc64.o 00:03:15.078 CC lib/vfio_user/host/vfio_user.o 00:03:15.078 CC lib/util/dif.o 00:03:15.078 LIB libspdk_dma.a 00:03:15.078 CC lib/util/fd.o 00:03:15.078 SO libspdk_dma.so.5.0 00:03:15.078 CC lib/util/fd_group.o 00:03:15.078 CC lib/util/file.o 00:03:15.078 CC lib/util/hexlify.o 00:03:15.335 SYMLINK libspdk_dma.so 00:03:15.335 CC lib/util/iov.o 00:03:15.335 CC lib/util/math.o 00:03:15.335 LIB libspdk_ioat.a 00:03:15.335 SO libspdk_ioat.so.7.0 00:03:15.335 CC lib/util/net.o 00:03:15.335 CC lib/util/pipe.o 00:03:15.335 SYMLINK libspdk_ioat.so 00:03:15.335 CC lib/util/strerror_tls.o 00:03:15.335 CC lib/util/string.o 00:03:15.335 LIB libspdk_vfio_user.a 00:03:15.335 SO libspdk_vfio_user.so.5.0 00:03:15.335 CC lib/util/uuid.o 00:03:15.335 CC lib/util/xor.o 00:03:15.592 CC lib/util/zipf.o 00:03:15.592 CC lib/util/md5.o 00:03:15.592 SYMLINK libspdk_vfio_user.so 00:03:15.849 LIB libspdk_util.a 00:03:16.106 SO libspdk_util.so.10.1 00:03:16.106 LIB libspdk_trace_parser.a 00:03:16.106 SYMLINK libspdk_util.so 00:03:16.106 SO libspdk_trace_parser.so.6.0 00:03:16.363 SYMLINK libspdk_trace_parser.so 00:03:16.363 CC lib/conf/conf.o 00:03:16.363 CC lib/vmd/vmd.o 00:03:16.363 CC lib/vmd/led.o 00:03:16.363 CC lib/json/json_parse.o 00:03:16.363 CC lib/json/json_util.o 00:03:16.363 CC lib/json/json_write.o 00:03:16.363 CC lib/env_dpdk/env.o 00:03:16.363 CC lib/env_dpdk/memory.o 00:03:16.363 CC lib/idxd/idxd.o 00:03:16.363 CC lib/rdma_utils/rdma_utils.o 00:03:16.621 CC lib/env_dpdk/pci.o 00:03:16.621 LIB libspdk_conf.a 00:03:16.621 CC lib/env_dpdk/init.o 00:03:16.621 SO libspdk_conf.so.6.0 00:03:16.621 LIB libspdk_rdma_utils.a 00:03:16.621 SYMLINK libspdk_conf.so 00:03:16.621 LIB libspdk_json.a 00:03:16.877 CC lib/idxd/idxd_user.o 00:03:16.877 CC lib/env_dpdk/threads.o 00:03:16.877 SO libspdk_rdma_utils.so.1.0 00:03:16.878 SO libspdk_json.so.6.0 00:03:16.878 SYMLINK libspdk_rdma_utils.so 00:03:16.878 CC lib/idxd/idxd_kernel.o 00:03:16.878 SYMLINK libspdk_json.so 00:03:16.878 CC lib/env_dpdk/pci_ioat.o 00:03:17.135 CC lib/env_dpdk/pci_virtio.o 00:03:17.135 CC lib/env_dpdk/pci_vmd.o 00:03:17.135 CC lib/rdma_provider/common.o 00:03:17.135 CC lib/env_dpdk/pci_idxd.o 00:03:17.135 CC lib/env_dpdk/pci_event.o 00:03:17.135 CC lib/env_dpdk/sigbus_handler.o 00:03:17.135 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:17.135 CC lib/jsonrpc/jsonrpc_server.o 00:03:17.135 LIB libspdk_idxd.a 00:03:17.135 CC lib/env_dpdk/pci_dpdk.o 00:03:17.391 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:17.391 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:17.391 SO libspdk_idxd.so.12.1 00:03:17.391 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:17.391 LIB libspdk_vmd.a 00:03:17.391 SO libspdk_vmd.so.6.0 00:03:17.391 SYMLINK libspdk_idxd.so 00:03:17.391 CC lib/jsonrpc/jsonrpc_client.o 00:03:17.391 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:17.391 SYMLINK libspdk_vmd.so 00:03:17.391 LIB libspdk_rdma_provider.a 00:03:17.391 SO libspdk_rdma_provider.so.7.0 00:03:17.649 SYMLINK libspdk_rdma_provider.so 00:03:17.649 LIB libspdk_jsonrpc.a 00:03:17.649 SO libspdk_jsonrpc.so.6.0 
00:03:17.649 SYMLINK libspdk_jsonrpc.so 00:03:17.907 CC lib/rpc/rpc.o 00:03:18.166 LIB libspdk_rpc.a 00:03:18.425 SO libspdk_rpc.so.6.0 00:03:18.425 LIB libspdk_env_dpdk.a 00:03:18.425 SYMLINK libspdk_rpc.so 00:03:18.425 SO libspdk_env_dpdk.so.15.1 00:03:18.684 SYMLINK libspdk_env_dpdk.so 00:03:18.684 CC lib/notify/notify.o 00:03:18.684 CC lib/notify/notify_rpc.o 00:03:18.684 CC lib/keyring/keyring.o 00:03:18.684 CC lib/keyring/keyring_rpc.o 00:03:18.684 CC lib/trace/trace.o 00:03:18.684 CC lib/trace/trace_rpc.o 00:03:18.684 CC lib/trace/trace_flags.o 00:03:18.684 LIB libspdk_notify.a 00:03:18.943 SO libspdk_notify.so.6.0 00:03:18.943 LIB libspdk_keyring.a 00:03:18.943 SYMLINK libspdk_notify.so 00:03:18.943 SO libspdk_keyring.so.2.0 00:03:18.943 LIB libspdk_trace.a 00:03:18.943 SO libspdk_trace.so.11.0 00:03:18.943 SYMLINK libspdk_keyring.so 00:03:18.943 SYMLINK libspdk_trace.so 00:03:19.201 CC lib/sock/sock.o 00:03:19.201 CC lib/sock/sock_rpc.o 00:03:19.201 CC lib/thread/thread.o 00:03:19.201 CC lib/thread/iobuf.o 00:03:19.769 LIB libspdk_sock.a 00:03:19.769 SO libspdk_sock.so.10.0 00:03:19.769 SYMLINK libspdk_sock.so 00:03:20.027 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:20.027 CC lib/nvme/nvme_ctrlr.o 00:03:20.027 CC lib/nvme/nvme_fabric.o 00:03:20.027 CC lib/nvme/nvme_ns_cmd.o 00:03:20.285 CC lib/nvme/nvme_ns.o 00:03:20.285 CC lib/nvme/nvme_pcie_common.o 00:03:20.285 CC lib/nvme/nvme_pcie.o 00:03:20.285 CC lib/nvme/nvme_qpair.o 00:03:20.285 CC lib/nvme/nvme.o 00:03:21.220 CC lib/nvme/nvme_quirks.o 00:03:21.220 CC lib/nvme/nvme_transport.o 00:03:21.220 CC lib/nvme/nvme_discovery.o 00:03:21.220 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:21.220 LIB libspdk_thread.a 00:03:21.220 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:21.220 CC lib/nvme/nvme_tcp.o 00:03:21.220 SO libspdk_thread.so.11.0 00:03:21.478 CC lib/nvme/nvme_opal.o 00:03:21.478 SYMLINK libspdk_thread.so 00:03:21.478 CC lib/nvme/nvme_io_msg.o 00:03:21.736 CC lib/nvme/nvme_poll_group.o 00:03:21.736 CC lib/accel/accel.o 00:03:21.736 CC lib/accel/accel_rpc.o 00:03:21.993 CC lib/nvme/nvme_zns.o 00:03:21.993 CC lib/accel/accel_sw.o 00:03:21.993 CC lib/nvme/nvme_stubs.o 00:03:21.993 CC lib/nvme/nvme_auth.o 00:03:22.250 CC lib/nvme/nvme_cuse.o 00:03:22.250 CC lib/blob/blobstore.o 00:03:22.250 CC lib/blob/request.o 00:03:22.508 CC lib/blob/zeroes.o 00:03:22.508 CC lib/blob/blob_bs_dev.o 00:03:22.508 CC lib/nvme/nvme_vfio_user.o 00:03:22.508 CC lib/nvme/nvme_rdma.o 00:03:23.090 CC lib/init/json_config.o 00:03:23.090 CC lib/virtio/virtio.o 00:03:23.348 CC lib/init/subsystem.o 00:03:23.348 CC lib/init/subsystem_rpc.o 00:03:23.348 CC lib/init/rpc.o 00:03:23.348 LIB libspdk_accel.a 00:03:23.348 CC lib/virtio/virtio_vhost_user.o 00:03:23.348 CC lib/vfu_tgt/tgt_endpoint.o 00:03:23.348 CC lib/fsdev/fsdev.o 00:03:23.348 SO libspdk_accel.so.16.0 00:03:23.348 CC lib/fsdev/fsdev_io.o 00:03:23.348 CC lib/fsdev/fsdev_rpc.o 00:03:23.348 CC lib/vfu_tgt/tgt_rpc.o 00:03:23.348 SYMLINK libspdk_accel.so 00:03:23.348 LIB libspdk_init.a 00:03:23.607 SO libspdk_init.so.6.0 00:03:23.607 SYMLINK libspdk_init.so 00:03:23.607 CC lib/virtio/virtio_vfio_user.o 00:03:23.607 CC lib/virtio/virtio_pci.o 00:03:23.607 CC lib/bdev/bdev.o 00:03:23.866 CC lib/bdev/bdev_rpc.o 00:03:23.866 LIB libspdk_vfu_tgt.a 00:03:23.866 CC lib/event/app.o 00:03:23.866 SO libspdk_vfu_tgt.so.3.0 00:03:23.866 CC lib/bdev/bdev_zone.o 00:03:23.866 SYMLINK libspdk_vfu_tgt.so 00:03:23.866 CC lib/bdev/part.o 00:03:23.866 CC lib/event/reactor.o 00:03:23.866 LIB libspdk_virtio.a 00:03:24.124 SO 
libspdk_virtio.so.7.0 00:03:24.124 CC lib/event/log_rpc.o 00:03:24.124 SYMLINK libspdk_virtio.so 00:03:24.124 CC lib/bdev/scsi_nvme.o 00:03:24.124 CC lib/event/app_rpc.o 00:03:24.124 LIB libspdk_fsdev.a 00:03:24.124 SO libspdk_fsdev.so.2.0 00:03:24.124 CC lib/event/scheduler_static.o 00:03:24.383 SYMLINK libspdk_fsdev.so 00:03:24.383 LIB libspdk_nvme.a 00:03:24.383 LIB libspdk_event.a 00:03:24.383 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:24.641 SO libspdk_event.so.14.0 00:03:24.641 SO libspdk_nvme.so.15.0 00:03:24.641 SYMLINK libspdk_event.so 00:03:24.900 SYMLINK libspdk_nvme.so 00:03:25.467 LIB libspdk_fuse_dispatcher.a 00:03:25.467 SO libspdk_fuse_dispatcher.so.1.0 00:03:25.467 SYMLINK libspdk_fuse_dispatcher.so 00:03:26.842 LIB libspdk_blob.a 00:03:26.842 SO libspdk_blob.so.12.0 00:03:26.842 SYMLINK libspdk_blob.so 00:03:27.100 CC lib/lvol/lvol.o 00:03:27.100 CC lib/blobfs/blobfs.o 00:03:27.100 CC lib/blobfs/tree.o 00:03:27.359 LIB libspdk_bdev.a 00:03:27.359 SO libspdk_bdev.so.17.0 00:03:27.359 SYMLINK libspdk_bdev.so 00:03:27.616 CC lib/scsi/dev.o 00:03:27.616 CC lib/scsi/lun.o 00:03:27.616 CC lib/scsi/port.o 00:03:27.616 CC lib/nvmf/ctrlr.o 00:03:27.616 CC lib/ftl/ftl_core.o 00:03:27.616 CC lib/nbd/nbd.o 00:03:27.616 CC lib/scsi/scsi.o 00:03:27.616 CC lib/ublk/ublk.o 00:03:27.890 CC lib/ftl/ftl_init.o 00:03:27.890 CC lib/ftl/ftl_layout.o 00:03:27.890 CC lib/ftl/ftl_debug.o 00:03:28.159 LIB libspdk_blobfs.a 00:03:28.159 CC lib/scsi/scsi_bdev.o 00:03:28.159 SO libspdk_blobfs.so.11.0 00:03:28.159 CC lib/ftl/ftl_io.o 00:03:28.159 CC lib/ublk/ublk_rpc.o 00:03:28.159 SYMLINK libspdk_blobfs.so 00:03:28.159 CC lib/scsi/scsi_pr.o 00:03:28.159 CC lib/nbd/nbd_rpc.o 00:03:28.159 CC lib/ftl/ftl_sb.o 00:03:28.417 LIB libspdk_lvol.a 00:03:28.417 CC lib/ftl/ftl_l2p.o 00:03:28.417 SO libspdk_lvol.so.11.0 00:03:28.417 CC lib/scsi/scsi_rpc.o 00:03:28.417 SYMLINK libspdk_lvol.so 00:03:28.417 CC lib/scsi/task.o 00:03:28.417 LIB libspdk_nbd.a 00:03:28.417 SO libspdk_nbd.so.7.0 00:03:28.417 CC lib/ftl/ftl_l2p_flat.o 00:03:28.417 CC lib/ftl/ftl_nv_cache.o 00:03:28.675 SYMLINK libspdk_nbd.so 00:03:28.675 CC lib/ftl/ftl_band.o 00:03:28.675 CC lib/ftl/ftl_band_ops.o 00:03:28.675 CC lib/ftl/ftl_writer.o 00:03:28.675 LIB libspdk_ublk.a 00:03:28.675 SO libspdk_ublk.so.3.0 00:03:28.675 CC lib/ftl/ftl_rq.o 00:03:28.675 CC lib/ftl/ftl_reloc.o 00:03:28.675 SYMLINK libspdk_ublk.so 00:03:28.675 CC lib/ftl/ftl_l2p_cache.o 00:03:28.675 CC lib/ftl/ftl_p2l.o 00:03:28.675 LIB libspdk_scsi.a 00:03:28.934 SO libspdk_scsi.so.9.0 00:03:28.934 CC lib/ftl/ftl_p2l_log.o 00:03:28.934 CC lib/nvmf/ctrlr_discovery.o 00:03:28.934 CC lib/ftl/mngt/ftl_mngt.o 00:03:28.934 SYMLINK libspdk_scsi.so 00:03:28.934 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:29.192 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:29.192 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:29.450 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:29.450 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:29.450 CC lib/nvmf/ctrlr_bdev.o 00:03:29.450 CC lib/nvmf/subsystem.o 00:03:29.450 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:29.450 CC lib/iscsi/conn.o 00:03:29.450 CC lib/vhost/vhost.o 00:03:29.708 CC lib/nvmf/nvmf.o 00:03:29.708 CC lib/nvmf/nvmf_rpc.o 00:03:29.708 CC lib/nvmf/transport.o 00:03:29.708 CC lib/nvmf/tcp.o 00:03:29.966 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:29.966 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:30.224 CC lib/iscsi/init_grp.o 00:03:30.224 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:30.224 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:30.482 CC lib/iscsi/iscsi.o 00:03:30.482 CC lib/iscsi/param.o 00:03:30.482 
CC lib/iscsi/portal_grp.o 00:03:30.482 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:30.740 CC lib/vhost/vhost_rpc.o 00:03:30.740 CC lib/vhost/vhost_scsi.o 00:03:30.740 CC lib/vhost/vhost_blk.o 00:03:30.740 CC lib/nvmf/stubs.o 00:03:30.740 CC lib/iscsi/tgt_node.o 00:03:30.998 CC lib/iscsi/iscsi_subsystem.o 00:03:30.998 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:30.998 CC lib/nvmf/mdns_server.o 00:03:31.256 CC lib/ftl/utils/ftl_conf.o 00:03:31.256 CC lib/ftl/utils/ftl_md.o 00:03:31.514 CC lib/vhost/rte_vhost_user.o 00:03:31.514 CC lib/iscsi/iscsi_rpc.o 00:03:31.514 CC lib/iscsi/task.o 00:03:31.514 CC lib/ftl/utils/ftl_mempool.o 00:03:31.514 CC lib/nvmf/vfio_user.o 00:03:31.773 CC lib/nvmf/rdma.o 00:03:31.773 CC lib/nvmf/auth.o 00:03:31.773 CC lib/ftl/utils/ftl_bitmap.o 00:03:31.773 CC lib/ftl/utils/ftl_property.o 00:03:32.031 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:32.031 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:32.031 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:32.031 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:32.290 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:32.290 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:32.290 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:32.290 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:32.290 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:32.290 LIB libspdk_iscsi.a 00:03:32.290 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:32.548 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:32.548 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:32.548 SO libspdk_iscsi.so.8.0 00:03:32.548 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:32.548 CC lib/ftl/base/ftl_base_dev.o 00:03:32.806 CC lib/ftl/base/ftl_base_bdev.o 00:03:32.806 LIB libspdk_vhost.a 00:03:32.806 SYMLINK libspdk_iscsi.so 00:03:32.806 CC lib/ftl/ftl_trace.o 00:03:32.806 SO libspdk_vhost.so.8.0 00:03:32.806 SYMLINK libspdk_vhost.so 00:03:33.065 LIB libspdk_ftl.a 00:03:33.323 SO libspdk_ftl.so.9.0 00:03:33.582 SYMLINK libspdk_ftl.so 00:03:34.958 LIB libspdk_nvmf.a 00:03:34.958 SO libspdk_nvmf.so.20.0 00:03:35.216 SYMLINK libspdk_nvmf.so 00:03:35.474 CC module/env_dpdk/env_dpdk_rpc.o 00:03:35.474 CC module/vfu_device/vfu_virtio.o 00:03:35.732 CC module/sock/posix/posix.o 00:03:35.732 CC module/blob/bdev/blob_bdev.o 00:03:35.732 CC module/sock/uring/uring.o 00:03:35.732 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:35.732 CC module/keyring/file/keyring.o 00:03:35.732 CC module/accel/ioat/accel_ioat.o 00:03:35.732 CC module/accel/error/accel_error.o 00:03:35.732 CC module/fsdev/aio/fsdev_aio.o 00:03:35.732 LIB libspdk_env_dpdk_rpc.a 00:03:35.732 SO libspdk_env_dpdk_rpc.so.6.0 00:03:35.732 SYMLINK libspdk_env_dpdk_rpc.so 00:03:35.732 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:35.732 CC module/keyring/file/keyring_rpc.o 00:03:35.991 CC module/accel/ioat/accel_ioat_rpc.o 00:03:35.991 LIB libspdk_scheduler_dynamic.a 00:03:35.991 CC module/accel/error/accel_error_rpc.o 00:03:35.991 SO libspdk_scheduler_dynamic.so.4.0 00:03:35.991 LIB libspdk_blob_bdev.a 00:03:35.991 CC module/fsdev/aio/linux_aio_mgr.o 00:03:35.991 LIB libspdk_keyring_file.a 00:03:35.991 SYMLINK libspdk_scheduler_dynamic.so 00:03:35.991 SO libspdk_blob_bdev.so.12.0 00:03:35.991 SO libspdk_keyring_file.so.2.0 00:03:35.991 LIB libspdk_accel_ioat.a 00:03:35.991 LIB libspdk_accel_error.a 00:03:35.991 SO libspdk_accel_ioat.so.6.0 00:03:36.249 SYMLINK libspdk_blob_bdev.so 00:03:36.249 SO libspdk_accel_error.so.2.0 00:03:36.249 SYMLINK libspdk_keyring_file.so 00:03:36.249 CC module/vfu_device/vfu_virtio_blk.o 00:03:36.249 SYMLINK libspdk_accel_ioat.so 00:03:36.249 SYMLINK libspdk_accel_error.so 
00:03:36.249 CC module/vfu_device/vfu_virtio_scsi.o 00:03:36.249 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:36.249 CC module/keyring/linux/keyring.o 00:03:36.508 CC module/accel/dsa/accel_dsa.o 00:03:36.508 CC module/accel/iaa/accel_iaa.o 00:03:36.508 CC module/accel/dsa/accel_dsa_rpc.o 00:03:36.508 LIB libspdk_scheduler_dpdk_governor.a 00:03:36.508 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:36.508 CC module/keyring/linux/keyring_rpc.o 00:03:36.508 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:36.508 LIB libspdk_fsdev_aio.a 00:03:36.508 SO libspdk_fsdev_aio.so.1.0 00:03:36.508 LIB libspdk_sock_uring.a 00:03:36.508 LIB libspdk_sock_posix.a 00:03:36.766 SO libspdk_sock_uring.so.5.0 00:03:36.766 CC module/accel/iaa/accel_iaa_rpc.o 00:03:36.766 LIB libspdk_keyring_linux.a 00:03:36.767 SO libspdk_sock_posix.so.6.0 00:03:36.767 CC module/vfu_device/vfu_virtio_rpc.o 00:03:36.767 SO libspdk_keyring_linux.so.1.0 00:03:36.767 SYMLINK libspdk_fsdev_aio.so 00:03:36.767 SYMLINK libspdk_sock_uring.so 00:03:36.767 LIB libspdk_accel_dsa.a 00:03:36.767 CC module/scheduler/gscheduler/gscheduler.o 00:03:36.767 CC module/bdev/delay/vbdev_delay.o 00:03:36.767 SYMLINK libspdk_keyring_linux.so 00:03:36.767 CC module/vfu_device/vfu_virtio_fs.o 00:03:36.767 SYMLINK libspdk_sock_posix.so 00:03:36.767 SO libspdk_accel_dsa.so.5.0 00:03:36.767 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:36.767 CC module/bdev/error/vbdev_error.o 00:03:36.767 LIB libspdk_accel_iaa.a 00:03:36.767 SYMLINK libspdk_accel_dsa.so 00:03:36.767 CC module/bdev/error/vbdev_error_rpc.o 00:03:36.767 SO libspdk_accel_iaa.so.3.0 00:03:37.025 CC module/bdev/gpt/gpt.o 00:03:37.025 LIB libspdk_scheduler_gscheduler.a 00:03:37.025 SYMLINK libspdk_accel_iaa.so 00:03:37.025 CC module/bdev/gpt/vbdev_gpt.o 00:03:37.025 SO libspdk_scheduler_gscheduler.so.4.0 00:03:37.025 CC module/blobfs/bdev/blobfs_bdev.o 00:03:37.025 SYMLINK libspdk_scheduler_gscheduler.so 00:03:37.025 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:37.025 LIB libspdk_vfu_device.a 00:03:37.025 SO libspdk_vfu_device.so.3.0 00:03:37.025 CC module/bdev/lvol/vbdev_lvol.o 00:03:37.025 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:37.025 LIB libspdk_bdev_error.a 00:03:37.284 SO libspdk_bdev_error.so.6.0 00:03:37.284 CC module/bdev/null/bdev_null.o 00:03:37.284 SYMLINK libspdk_vfu_device.so 00:03:37.284 LIB libspdk_bdev_delay.a 00:03:37.284 CC module/bdev/null/bdev_null_rpc.o 00:03:37.284 CC module/bdev/malloc/bdev_malloc.o 00:03:37.284 LIB libspdk_blobfs_bdev.a 00:03:37.284 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:37.284 SO libspdk_bdev_delay.so.6.0 00:03:37.284 SYMLINK libspdk_bdev_error.so 00:03:37.284 SO libspdk_blobfs_bdev.so.6.0 00:03:37.284 LIB libspdk_bdev_gpt.a 00:03:37.284 SO libspdk_bdev_gpt.so.6.0 00:03:37.284 SYMLINK libspdk_bdev_delay.so 00:03:37.284 SYMLINK libspdk_blobfs_bdev.so 00:03:37.284 SYMLINK libspdk_bdev_gpt.so 00:03:37.284 CC module/bdev/nvme/bdev_nvme.o 00:03:37.543 CC module/bdev/passthru/vbdev_passthru.o 00:03:37.543 CC module/bdev/raid/bdev_raid.o 00:03:37.543 LIB libspdk_bdev_null.a 00:03:37.543 CC module/bdev/split/vbdev_split.o 00:03:37.543 SO libspdk_bdev_null.so.6.0 00:03:37.543 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:37.543 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:37.543 CC module/bdev/uring/bdev_uring.o 00:03:37.543 SYMLINK libspdk_bdev_null.so 00:03:37.543 CC module/bdev/uring/bdev_uring_rpc.o 00:03:37.801 LIB libspdk_bdev_malloc.a 00:03:37.801 SO libspdk_bdev_malloc.so.6.0 00:03:37.801 LIB libspdk_bdev_lvol.a 
00:03:37.801 CC module/bdev/split/vbdev_split_rpc.o 00:03:37.801 SYMLINK libspdk_bdev_malloc.so 00:03:37.801 CC module/bdev/raid/bdev_raid_rpc.o 00:03:37.801 CC module/bdev/raid/bdev_raid_sb.o 00:03:37.801 SO libspdk_bdev_lvol.so.6.0 00:03:37.801 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:38.060 SYMLINK libspdk_bdev_lvol.so 00:03:38.060 LIB libspdk_bdev_zone_block.a 00:03:38.060 SO libspdk_bdev_zone_block.so.6.0 00:03:38.060 LIB libspdk_bdev_split.a 00:03:38.060 CC module/bdev/aio/bdev_aio.o 00:03:38.060 LIB libspdk_bdev_uring.a 00:03:38.060 SO libspdk_bdev_split.so.6.0 00:03:38.060 CC module/bdev/aio/bdev_aio_rpc.o 00:03:38.060 SO libspdk_bdev_uring.so.6.0 00:03:38.060 LIB libspdk_bdev_passthru.a 00:03:38.060 SYMLINK libspdk_bdev_zone_block.so 00:03:38.060 SO libspdk_bdev_passthru.so.6.0 00:03:38.060 SYMLINK libspdk_bdev_split.so 00:03:38.060 CC module/bdev/ftl/bdev_ftl.o 00:03:38.060 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:38.060 SYMLINK libspdk_bdev_uring.so 00:03:38.060 CC module/bdev/nvme/nvme_rpc.o 00:03:38.318 SYMLINK libspdk_bdev_passthru.so 00:03:38.318 CC module/bdev/nvme/bdev_mdns_client.o 00:03:38.318 CC module/bdev/nvme/vbdev_opal.o 00:03:38.318 CC module/bdev/iscsi/bdev_iscsi.o 00:03:38.318 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:38.319 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:38.319 CC module/bdev/raid/raid0.o 00:03:38.578 LIB libspdk_bdev_aio.a 00:03:38.578 SO libspdk_bdev_aio.so.6.0 00:03:38.578 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:38.578 SYMLINK libspdk_bdev_aio.so 00:03:38.578 CC module/bdev/raid/raid1.o 00:03:38.578 CC module/bdev/raid/concat.o 00:03:38.578 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:38.848 LIB libspdk_bdev_ftl.a 00:03:38.848 LIB libspdk_bdev_iscsi.a 00:03:38.848 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:38.849 SO libspdk_bdev_ftl.so.6.0 00:03:38.849 SO libspdk_bdev_iscsi.so.6.0 00:03:38.849 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:38.849 SYMLINK libspdk_bdev_iscsi.so 00:03:38.849 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:38.849 SYMLINK libspdk_bdev_ftl.so 00:03:38.849 LIB libspdk_bdev_raid.a 00:03:38.849 SO libspdk_bdev_raid.so.6.0 00:03:39.107 SYMLINK libspdk_bdev_raid.so 00:03:39.107 LIB libspdk_bdev_virtio.a 00:03:39.107 SO libspdk_bdev_virtio.so.6.0 00:03:39.107 SYMLINK libspdk_bdev_virtio.so 00:03:41.011 LIB libspdk_bdev_nvme.a 00:03:41.011 SO libspdk_bdev_nvme.so.7.1 00:03:41.011 SYMLINK libspdk_bdev_nvme.so 00:03:41.288 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:41.288 CC module/event/subsystems/sock/sock.o 00:03:41.288 CC module/event/subsystems/vmd/vmd.o 00:03:41.288 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:41.288 CC module/event/subsystems/iobuf/iobuf.o 00:03:41.288 CC module/event/subsystems/keyring/keyring.o 00:03:41.288 CC module/event/subsystems/fsdev/fsdev.o 00:03:41.288 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:41.288 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:41.288 CC module/event/subsystems/scheduler/scheduler.o 00:03:41.569 LIB libspdk_event_vfu_tgt.a 00:03:41.569 LIB libspdk_event_keyring.a 00:03:41.569 LIB libspdk_event_fsdev.a 00:03:41.569 LIB libspdk_event_vhost_blk.a 00:03:41.569 LIB libspdk_event_scheduler.a 00:03:41.569 LIB libspdk_event_sock.a 00:03:41.569 LIB libspdk_event_vmd.a 00:03:41.569 LIB libspdk_event_iobuf.a 00:03:41.569 SO libspdk_event_vfu_tgt.so.3.0 00:03:41.569 SO libspdk_event_keyring.so.1.0 00:03:41.569 SO libspdk_event_fsdev.so.1.0 00:03:41.569 SO libspdk_event_sock.so.5.0 00:03:41.569 SO libspdk_event_vhost_blk.so.3.0 00:03:41.569 SO 
libspdk_event_scheduler.so.4.0 00:03:41.569 SO libspdk_event_vmd.so.6.0 00:03:41.569 SO libspdk_event_iobuf.so.3.0 00:03:41.569 SYMLINK libspdk_event_keyring.so 00:03:41.569 SYMLINK libspdk_event_vfu_tgt.so 00:03:41.569 SYMLINK libspdk_event_sock.so 00:03:41.569 SYMLINK libspdk_event_fsdev.so 00:03:41.569 SYMLINK libspdk_event_scheduler.so 00:03:41.569 SYMLINK libspdk_event_vhost_blk.so 00:03:41.569 SYMLINK libspdk_event_vmd.so 00:03:41.569 SYMLINK libspdk_event_iobuf.so 00:03:41.841 CC module/event/subsystems/accel/accel.o 00:03:42.100 LIB libspdk_event_accel.a 00:03:42.100 SO libspdk_event_accel.so.6.0 00:03:42.100 SYMLINK libspdk_event_accel.so 00:03:42.358 CC module/event/subsystems/bdev/bdev.o 00:03:42.618 LIB libspdk_event_bdev.a 00:03:42.618 SO libspdk_event_bdev.so.6.0 00:03:42.618 SYMLINK libspdk_event_bdev.so 00:03:42.877 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:42.877 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:42.877 CC module/event/subsystems/nbd/nbd.o 00:03:42.877 CC module/event/subsystems/ublk/ublk.o 00:03:42.877 CC module/event/subsystems/scsi/scsi.o 00:03:43.136 LIB libspdk_event_nbd.a 00:03:43.136 LIB libspdk_event_ublk.a 00:03:43.136 LIB libspdk_event_scsi.a 00:03:43.136 SO libspdk_event_ublk.so.3.0 00:03:43.136 SO libspdk_event_nbd.so.6.0 00:03:43.136 SO libspdk_event_scsi.so.6.0 00:03:43.136 SYMLINK libspdk_event_ublk.so 00:03:43.136 SYMLINK libspdk_event_nbd.so 00:03:43.136 LIB libspdk_event_nvmf.a 00:03:43.136 SYMLINK libspdk_event_scsi.so 00:03:43.394 SO libspdk_event_nvmf.so.6.0 00:03:43.394 SYMLINK libspdk_event_nvmf.so 00:03:43.394 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:43.394 CC module/event/subsystems/iscsi/iscsi.o 00:03:43.653 LIB libspdk_event_vhost_scsi.a 00:03:43.653 SO libspdk_event_vhost_scsi.so.3.0 00:03:43.653 LIB libspdk_event_iscsi.a 00:03:43.653 SO libspdk_event_iscsi.so.6.0 00:03:43.653 SYMLINK libspdk_event_vhost_scsi.so 00:03:43.912 SYMLINK libspdk_event_iscsi.so 00:03:43.912 SO libspdk.so.6.0 00:03:43.912 SYMLINK libspdk.so 00:03:44.171 CC app/spdk_nvme_identify/identify.o 00:03:44.171 CC app/trace_record/trace_record.o 00:03:44.171 CC app/spdk_lspci/spdk_lspci.o 00:03:44.171 CXX app/trace/trace.o 00:03:44.171 CC app/spdk_nvme_perf/perf.o 00:03:44.171 CC app/nvmf_tgt/nvmf_main.o 00:03:44.171 CC app/spdk_tgt/spdk_tgt.o 00:03:44.171 CC app/iscsi_tgt/iscsi_tgt.o 00:03:44.430 CC test/thread/poller_perf/poller_perf.o 00:03:44.430 CC examples/util/zipf/zipf.o 00:03:44.430 LINK spdk_lspci 00:03:44.430 LINK poller_perf 00:03:44.430 LINK nvmf_tgt 00:03:44.430 LINK zipf 00:03:44.430 LINK spdk_trace_record 00:03:44.688 LINK iscsi_tgt 00:03:44.688 LINK spdk_tgt 00:03:44.688 CC app/spdk_nvme_discover/discovery_aer.o 00:03:44.688 LINK spdk_trace 00:03:44.947 CC app/spdk_top/spdk_top.o 00:03:44.947 CC examples/ioat/perf/perf.o 00:03:44.947 CC test/dma/test_dma/test_dma.o 00:03:44.947 LINK spdk_nvme_discover 00:03:44.947 CC test/app/bdev_svc/bdev_svc.o 00:03:44.947 CC examples/ioat/verify/verify.o 00:03:44.947 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:44.947 CC examples/vmd/lsvmd/lsvmd.o 00:03:45.205 LINK ioat_perf 00:03:45.205 LINK bdev_svc 00:03:45.205 LINK lsvmd 00:03:45.205 LINK verify 00:03:45.205 CC app/spdk_dd/spdk_dd.o 00:03:45.205 LINK spdk_nvme_identify 00:03:45.463 LINK spdk_nvme_perf 00:03:45.463 CC examples/vmd/led/led.o 00:03:45.463 CC app/fio/nvme/fio_plugin.o 00:03:45.463 LINK test_dma 00:03:45.463 CC app/vhost/vhost.o 00:03:45.463 LINK nvme_fuzz 00:03:45.463 CC app/fio/bdev/fio_plugin.o 00:03:45.719 LINK led 
00:03:45.719 CC examples/idxd/perf/perf.o 00:03:45.719 CC test/app/histogram_perf/histogram_perf.o 00:03:45.719 LINK vhost 00:03:45.719 LINK spdk_dd 00:03:45.719 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:45.719 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:45.976 LINK histogram_perf 00:03:45.976 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:45.976 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:45.976 LINK spdk_top 00:03:45.976 TEST_HEADER include/spdk/accel.h 00:03:45.976 TEST_HEADER include/spdk/accel_module.h 00:03:45.976 TEST_HEADER include/spdk/assert.h 00:03:46.233 TEST_HEADER include/spdk/barrier.h 00:03:46.233 TEST_HEADER include/spdk/base64.h 00:03:46.233 CC test/app/jsoncat/jsoncat.o 00:03:46.233 TEST_HEADER include/spdk/bdev.h 00:03:46.233 TEST_HEADER include/spdk/bdev_module.h 00:03:46.233 TEST_HEADER include/spdk/bdev_zone.h 00:03:46.233 TEST_HEADER include/spdk/bit_array.h 00:03:46.233 TEST_HEADER include/spdk/bit_pool.h 00:03:46.233 TEST_HEADER include/spdk/blob_bdev.h 00:03:46.233 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:46.233 TEST_HEADER include/spdk/blobfs.h 00:03:46.233 TEST_HEADER include/spdk/blob.h 00:03:46.233 LINK idxd_perf 00:03:46.233 TEST_HEADER include/spdk/conf.h 00:03:46.233 TEST_HEADER include/spdk/config.h 00:03:46.233 TEST_HEADER include/spdk/cpuset.h 00:03:46.233 TEST_HEADER include/spdk/crc16.h 00:03:46.233 TEST_HEADER include/spdk/crc32.h 00:03:46.233 TEST_HEADER include/spdk/crc64.h 00:03:46.233 TEST_HEADER include/spdk/dif.h 00:03:46.233 TEST_HEADER include/spdk/dma.h 00:03:46.233 TEST_HEADER include/spdk/endian.h 00:03:46.233 TEST_HEADER include/spdk/env_dpdk.h 00:03:46.233 TEST_HEADER include/spdk/env.h 00:03:46.233 LINK interrupt_tgt 00:03:46.233 TEST_HEADER include/spdk/event.h 00:03:46.233 TEST_HEADER include/spdk/fd_group.h 00:03:46.233 TEST_HEADER include/spdk/fd.h 00:03:46.233 TEST_HEADER include/spdk/file.h 00:03:46.233 TEST_HEADER include/spdk/fsdev.h 00:03:46.233 TEST_HEADER include/spdk/fsdev_module.h 00:03:46.233 TEST_HEADER include/spdk/ftl.h 00:03:46.233 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:46.233 TEST_HEADER include/spdk/gpt_spec.h 00:03:46.233 TEST_HEADER include/spdk/hexlify.h 00:03:46.233 TEST_HEADER include/spdk/histogram_data.h 00:03:46.233 LINK spdk_bdev 00:03:46.233 TEST_HEADER include/spdk/idxd.h 00:03:46.233 TEST_HEADER include/spdk/idxd_spec.h 00:03:46.233 TEST_HEADER include/spdk/init.h 00:03:46.233 TEST_HEADER include/spdk/ioat.h 00:03:46.233 TEST_HEADER include/spdk/ioat_spec.h 00:03:46.233 TEST_HEADER include/spdk/iscsi_spec.h 00:03:46.233 TEST_HEADER include/spdk/json.h 00:03:46.233 TEST_HEADER include/spdk/jsonrpc.h 00:03:46.233 TEST_HEADER include/spdk/keyring.h 00:03:46.233 TEST_HEADER include/spdk/keyring_module.h 00:03:46.233 TEST_HEADER include/spdk/likely.h 00:03:46.233 TEST_HEADER include/spdk/log.h 00:03:46.233 TEST_HEADER include/spdk/lvol.h 00:03:46.233 TEST_HEADER include/spdk/md5.h 00:03:46.233 TEST_HEADER include/spdk/memory.h 00:03:46.233 CC examples/thread/thread/thread_ex.o 00:03:46.233 TEST_HEADER include/spdk/mmio.h 00:03:46.233 TEST_HEADER include/spdk/nbd.h 00:03:46.233 TEST_HEADER include/spdk/net.h 00:03:46.233 TEST_HEADER include/spdk/notify.h 00:03:46.233 TEST_HEADER include/spdk/nvme.h 00:03:46.233 TEST_HEADER include/spdk/nvme_intel.h 00:03:46.233 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:46.233 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:46.233 TEST_HEADER include/spdk/nvme_spec.h 00:03:46.233 TEST_HEADER include/spdk/nvme_zns.h 00:03:46.233 LINK spdk_nvme 
00:03:46.233 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:46.233 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:46.233 TEST_HEADER include/spdk/nvmf.h 00:03:46.233 TEST_HEADER include/spdk/nvmf_spec.h 00:03:46.233 TEST_HEADER include/spdk/nvmf_transport.h 00:03:46.233 TEST_HEADER include/spdk/opal.h 00:03:46.233 TEST_HEADER include/spdk/opal_spec.h 00:03:46.233 TEST_HEADER include/spdk/pci_ids.h 00:03:46.233 TEST_HEADER include/spdk/pipe.h 00:03:46.233 TEST_HEADER include/spdk/queue.h 00:03:46.233 TEST_HEADER include/spdk/reduce.h 00:03:46.233 TEST_HEADER include/spdk/rpc.h 00:03:46.233 TEST_HEADER include/spdk/scheduler.h 00:03:46.233 LINK jsoncat 00:03:46.233 TEST_HEADER include/spdk/scsi.h 00:03:46.233 TEST_HEADER include/spdk/scsi_spec.h 00:03:46.233 TEST_HEADER include/spdk/sock.h 00:03:46.233 TEST_HEADER include/spdk/stdinc.h 00:03:46.233 TEST_HEADER include/spdk/string.h 00:03:46.233 TEST_HEADER include/spdk/thread.h 00:03:46.233 TEST_HEADER include/spdk/trace.h 00:03:46.233 TEST_HEADER include/spdk/trace_parser.h 00:03:46.233 TEST_HEADER include/spdk/tree.h 00:03:46.233 TEST_HEADER include/spdk/ublk.h 00:03:46.233 TEST_HEADER include/spdk/util.h 00:03:46.233 TEST_HEADER include/spdk/uuid.h 00:03:46.233 TEST_HEADER include/spdk/version.h 00:03:46.233 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:46.233 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:46.233 TEST_HEADER include/spdk/vhost.h 00:03:46.233 TEST_HEADER include/spdk/vmd.h 00:03:46.233 TEST_HEADER include/spdk/xor.h 00:03:46.233 TEST_HEADER include/spdk/zipf.h 00:03:46.233 CXX test/cpp_headers/accel.o 00:03:46.489 CC test/app/stub/stub.o 00:03:46.489 CC test/env/mem_callbacks/mem_callbacks.o 00:03:46.489 CC examples/sock/hello_world/hello_sock.o 00:03:46.489 LINK thread 00:03:46.489 LINK vhost_fuzz 00:03:46.489 CC test/rpc_client/rpc_client_test.o 00:03:46.489 CXX test/cpp_headers/accel_module.o 00:03:46.489 CC test/event/event_perf/event_perf.o 00:03:46.489 CC test/nvme/aer/aer.o 00:03:46.747 LINK stub 00:03:46.747 LINK event_perf 00:03:46.747 CXX test/cpp_headers/assert.o 00:03:46.747 LINK rpc_client_test 00:03:46.747 CC test/event/reactor/reactor.o 00:03:46.747 LINK hello_sock 00:03:46.747 CC test/event/reactor_perf/reactor_perf.o 00:03:47.004 CC test/event/app_repeat/app_repeat.o 00:03:47.004 CXX test/cpp_headers/barrier.o 00:03:47.004 CC test/env/vtophys/vtophys.o 00:03:47.004 LINK aer 00:03:47.004 LINK reactor 00:03:47.004 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:47.004 LINK reactor_perf 00:03:47.004 LINK app_repeat 00:03:47.004 CXX test/cpp_headers/base64.o 00:03:47.004 LINK vtophys 00:03:47.004 LINK mem_callbacks 00:03:47.004 CC examples/accel/perf/accel_perf.o 00:03:47.263 LINK env_dpdk_post_init 00:03:47.263 CC test/nvme/reset/reset.o 00:03:47.263 CC test/nvme/sgl/sgl.o 00:03:47.263 CXX test/cpp_headers/bdev.o 00:03:47.263 CC examples/blob/hello_world/hello_blob.o 00:03:47.521 CC test/event/scheduler/scheduler.o 00:03:47.521 CC examples/blob/cli/blobcli.o 00:03:47.521 CC test/env/memory/memory_ut.o 00:03:47.521 CXX test/cpp_headers/bdev_module.o 00:03:47.521 LINK reset 00:03:47.521 CC test/accel/dif/dif.o 00:03:47.521 LINK sgl 00:03:47.521 LINK hello_blob 00:03:47.778 CXX test/cpp_headers/bdev_zone.o 00:03:47.778 LINK scheduler 00:03:47.778 LINK accel_perf 00:03:47.778 CC test/nvme/e2edp/nvme_dp.o 00:03:47.778 CC test/nvme/overhead/overhead.o 00:03:47.778 CXX test/cpp_headers/bit_array.o 00:03:47.778 CC test/nvme/err_injection/err_injection.o 00:03:48.037 CXX test/cpp_headers/bit_pool.o 00:03:48.037 
CC test/env/pci/pci_ut.o 00:03:48.037 LINK iscsi_fuzz 00:03:48.037 LINK blobcli 00:03:48.037 CXX test/cpp_headers/blob_bdev.o 00:03:48.037 LINK err_injection 00:03:48.295 LINK nvme_dp 00:03:48.295 LINK overhead 00:03:48.295 CC test/blobfs/mkfs/mkfs.o 00:03:48.295 CXX test/cpp_headers/blobfs_bdev.o 00:03:48.295 CC test/nvme/startup/startup.o 00:03:48.554 CC test/nvme/reserve/reserve.o 00:03:48.554 LINK dif 00:03:48.554 CC test/nvme/simple_copy/simple_copy.o 00:03:48.554 CC examples/nvme/hello_world/hello_world.o 00:03:48.554 LINK mkfs 00:03:48.554 CC test/lvol/esnap/esnap.o 00:03:48.554 LINK pci_ut 00:03:48.554 CXX test/cpp_headers/blobfs.o 00:03:48.554 LINK startup 00:03:48.554 CXX test/cpp_headers/blob.o 00:03:48.813 LINK reserve 00:03:48.813 CXX test/cpp_headers/conf.o 00:03:48.813 LINK hello_world 00:03:48.813 LINK simple_copy 00:03:48.813 CC test/nvme/connect_stress/connect_stress.o 00:03:48.813 CXX test/cpp_headers/config.o 00:03:48.813 CXX test/cpp_headers/cpuset.o 00:03:49.073 CC test/nvme/boot_partition/boot_partition.o 00:03:49.073 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:49.073 LINK memory_ut 00:03:49.073 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:49.073 CC examples/nvme/reconnect/reconnect.o 00:03:49.073 CC examples/bdev/hello_world/hello_bdev.o 00:03:49.073 CC test/bdev/bdevio/bdevio.o 00:03:49.073 LINK connect_stress 00:03:49.073 CXX test/cpp_headers/crc16.o 00:03:49.073 LINK boot_partition 00:03:49.073 CXX test/cpp_headers/crc32.o 00:03:49.331 LINK hello_fsdev 00:03:49.331 LINK hello_bdev 00:03:49.331 CXX test/cpp_headers/crc64.o 00:03:49.331 CC test/nvme/compliance/nvme_compliance.o 00:03:49.331 CC test/nvme/fused_ordering/fused_ordering.o 00:03:49.331 LINK reconnect 00:03:49.590 CXX test/cpp_headers/dif.o 00:03:49.590 CC examples/bdev/bdevperf/bdevperf.o 00:03:49.590 LINK bdevio 00:03:49.590 CC examples/nvme/arbitration/arbitration.o 00:03:49.590 CC examples/nvme/hotplug/hotplug.o 00:03:49.590 LINK fused_ordering 00:03:49.590 LINK nvme_manage 00:03:49.590 CXX test/cpp_headers/dma.o 00:03:49.590 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:49.850 CXX test/cpp_headers/endian.o 00:03:49.850 LINK nvme_compliance 00:03:49.850 LINK hotplug 00:03:49.850 CC test/nvme/fdp/fdp.o 00:03:49.850 CC test/nvme/cuse/cuse.o 00:03:49.850 LINK doorbell_aers 00:03:49.850 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:49.850 CXX test/cpp_headers/env_dpdk.o 00:03:49.850 CXX test/cpp_headers/env.o 00:03:50.108 LINK arbitration 00:03:50.108 CXX test/cpp_headers/event.o 00:03:50.109 CC examples/nvme/abort/abort.o 00:03:50.109 CXX test/cpp_headers/fd_group.o 00:03:50.109 CXX test/cpp_headers/fd.o 00:03:50.109 LINK cmb_copy 00:03:50.109 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:50.367 LINK fdp 00:03:50.367 CXX test/cpp_headers/file.o 00:03:50.367 CXX test/cpp_headers/fsdev.o 00:03:50.367 CXX test/cpp_headers/fsdev_module.o 00:03:50.367 CXX test/cpp_headers/ftl.o 00:03:50.367 LINK pmr_persistence 00:03:50.367 CXX test/cpp_headers/fuse_dispatcher.o 00:03:50.626 CXX test/cpp_headers/gpt_spec.o 00:03:50.626 CXX test/cpp_headers/hexlify.o 00:03:50.626 CXX test/cpp_headers/histogram_data.o 00:03:50.626 CXX test/cpp_headers/idxd.o 00:03:50.626 LINK bdevperf 00:03:50.626 CXX test/cpp_headers/idxd_spec.o 00:03:50.626 LINK abort 00:03:50.626 CXX test/cpp_headers/init.o 00:03:50.626 CXX test/cpp_headers/ioat.o 00:03:50.626 CXX test/cpp_headers/ioat_spec.o 00:03:50.626 CXX test/cpp_headers/iscsi_spec.o 00:03:50.626 CXX test/cpp_headers/json.o 00:03:50.626 CXX 
test/cpp_headers/jsonrpc.o 00:03:50.885 CXX test/cpp_headers/keyring.o 00:03:50.885 CXX test/cpp_headers/keyring_module.o 00:03:50.885 CXX test/cpp_headers/likely.o 00:03:50.885 CXX test/cpp_headers/log.o 00:03:50.885 CXX test/cpp_headers/lvol.o 00:03:50.885 CXX test/cpp_headers/md5.o 00:03:50.885 CXX test/cpp_headers/memory.o 00:03:50.885 CXX test/cpp_headers/mmio.o 00:03:50.885 CC examples/nvmf/nvmf/nvmf.o 00:03:51.144 CXX test/cpp_headers/nbd.o 00:03:51.144 CXX test/cpp_headers/net.o 00:03:51.144 CXX test/cpp_headers/notify.o 00:03:51.144 CXX test/cpp_headers/nvme.o 00:03:51.144 CXX test/cpp_headers/nvme_intel.o 00:03:51.144 CXX test/cpp_headers/nvme_ocssd.o 00:03:51.144 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:51.144 CXX test/cpp_headers/nvme_spec.o 00:03:51.144 CXX test/cpp_headers/nvme_zns.o 00:03:51.144 CXX test/cpp_headers/nvmf_cmd.o 00:03:51.144 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:51.403 CXX test/cpp_headers/nvmf.o 00:03:51.403 CXX test/cpp_headers/nvmf_spec.o 00:03:51.403 CXX test/cpp_headers/nvmf_transport.o 00:03:51.403 CXX test/cpp_headers/opal.o 00:03:51.403 LINK nvmf 00:03:51.403 CXX test/cpp_headers/opal_spec.o 00:03:51.403 CXX test/cpp_headers/pci_ids.o 00:03:51.660 CXX test/cpp_headers/pipe.o 00:03:51.660 CXX test/cpp_headers/queue.o 00:03:51.660 LINK cuse 00:03:51.660 CXX test/cpp_headers/reduce.o 00:03:51.660 CXX test/cpp_headers/rpc.o 00:03:51.660 CXX test/cpp_headers/scheduler.o 00:03:51.660 CXX test/cpp_headers/scsi.o 00:03:51.660 CXX test/cpp_headers/scsi_spec.o 00:03:51.660 CXX test/cpp_headers/sock.o 00:03:51.660 CXX test/cpp_headers/stdinc.o 00:03:51.660 CXX test/cpp_headers/string.o 00:03:51.660 CXX test/cpp_headers/thread.o 00:03:51.660 CXX test/cpp_headers/trace.o 00:03:51.660 CXX test/cpp_headers/trace_parser.o 00:03:51.660 CXX test/cpp_headers/tree.o 00:03:51.919 CXX test/cpp_headers/ublk.o 00:03:51.919 CXX test/cpp_headers/util.o 00:03:51.919 CXX test/cpp_headers/uuid.o 00:03:51.919 CXX test/cpp_headers/version.o 00:03:51.919 CXX test/cpp_headers/vfio_user_pci.o 00:03:51.919 CXX test/cpp_headers/vfio_user_spec.o 00:03:51.919 CXX test/cpp_headers/vhost.o 00:03:51.919 CXX test/cpp_headers/vmd.o 00:03:51.919 CXX test/cpp_headers/xor.o 00:03:51.919 CXX test/cpp_headers/zipf.o 00:03:55.215 LINK esnap 00:03:55.474 00:03:55.475 real 1m40.398s 00:03:55.475 user 9m24.298s 00:03:55.475 sys 1m37.444s 00:03:55.475 02:48:26 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:55.475 02:48:26 make -- common/autotest_common.sh@10 -- $ set +x 00:03:55.475 ************************************ 00:03:55.475 END TEST make 00:03:55.475 ************************************ 00:03:55.475 02:48:26 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:55.475 02:48:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:55.475 02:48:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:55.475 02:48:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.475 02:48:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:55.475 02:48:26 -- pm/common@44 -- $ pid=5295 00:03:55.475 02:48:26 -- pm/common@50 -- $ kill -TERM 5295 00:03:55.475 02:48:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.475 02:48:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:55.475 02:48:26 -- pm/common@44 -- $ pid=5297 00:03:55.475 02:48:26 -- pm/common@50 -- $ kill -TERM 5297 00:03:55.475 02:48:26 -- spdk/autorun.sh@26 -- $ (( 
SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:55.475 02:48:26 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:55.735 02:48:26 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:55.735 02:48:26 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:55.735 02:48:26 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:55.735 02:48:26 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:55.735 02:48:26 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:55.735 02:48:26 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:55.735 02:48:26 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:55.735 02:48:26 -- scripts/common.sh@336 -- # IFS=.-: 00:03:55.735 02:48:26 -- scripts/common.sh@336 -- # read -ra ver1 00:03:55.735 02:48:26 -- scripts/common.sh@337 -- # IFS=.-: 00:03:55.735 02:48:26 -- scripts/common.sh@337 -- # read -ra ver2 00:03:55.735 02:48:26 -- scripts/common.sh@338 -- # local 'op=<' 00:03:55.735 02:48:26 -- scripts/common.sh@340 -- # ver1_l=2 00:03:55.735 02:48:26 -- scripts/common.sh@341 -- # ver2_l=1 00:03:55.735 02:48:26 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:55.735 02:48:26 -- scripts/common.sh@344 -- # case "$op" in 00:03:55.735 02:48:26 -- scripts/common.sh@345 -- # : 1 00:03:55.735 02:48:26 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:55.735 02:48:26 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:55.735 02:48:26 -- scripts/common.sh@365 -- # decimal 1 00:03:55.735 02:48:26 -- scripts/common.sh@353 -- # local d=1 00:03:55.735 02:48:26 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:55.735 02:48:26 -- scripts/common.sh@355 -- # echo 1 00:03:55.735 02:48:26 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:55.735 02:48:26 -- scripts/common.sh@366 -- # decimal 2 00:03:55.735 02:48:26 -- scripts/common.sh@353 -- # local d=2 00:03:55.735 02:48:26 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:55.735 02:48:26 -- scripts/common.sh@355 -- # echo 2 00:03:55.735 02:48:26 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:55.735 02:48:26 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:55.735 02:48:26 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:55.735 02:48:26 -- scripts/common.sh@368 -- # return 0 00:03:55.735 02:48:26 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:55.735 02:48:26 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:55.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.735 --rc genhtml_branch_coverage=1 00:03:55.735 --rc genhtml_function_coverage=1 00:03:55.735 --rc genhtml_legend=1 00:03:55.735 --rc geninfo_all_blocks=1 00:03:55.735 --rc geninfo_unexecuted_blocks=1 00:03:55.735 00:03:55.735 ' 00:03:55.735 02:48:26 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:55.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.735 --rc genhtml_branch_coverage=1 00:03:55.735 --rc genhtml_function_coverage=1 00:03:55.735 --rc genhtml_legend=1 00:03:55.735 --rc geninfo_all_blocks=1 00:03:55.735 --rc geninfo_unexecuted_blocks=1 00:03:55.735 00:03:55.735 ' 00:03:55.735 02:48:26 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:55.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.735 --rc genhtml_branch_coverage=1 00:03:55.735 --rc genhtml_function_coverage=1 00:03:55.735 --rc genhtml_legend=1 00:03:55.735 
--rc geninfo_all_blocks=1 00:03:55.735 --rc geninfo_unexecuted_blocks=1 00:03:55.735 00:03:55.735 ' 00:03:55.735 02:48:26 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:55.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.735 --rc genhtml_branch_coverage=1 00:03:55.735 --rc genhtml_function_coverage=1 00:03:55.735 --rc genhtml_legend=1 00:03:55.735 --rc geninfo_all_blocks=1 00:03:55.735 --rc geninfo_unexecuted_blocks=1 00:03:55.735 00:03:55.735 ' 00:03:55.735 02:48:26 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:55.735 02:48:26 -- nvmf/common.sh@7 -- # uname -s 00:03:55.735 02:48:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:55.735 02:48:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:55.735 02:48:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:55.735 02:48:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:55.735 02:48:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:55.735 02:48:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:55.735 02:48:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:55.735 02:48:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:55.735 02:48:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:55.735 02:48:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:55.735 02:48:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:03:55.735 02:48:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:03:55.735 02:48:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:55.735 02:48:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:55.735 02:48:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:55.735 02:48:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:55.735 02:48:26 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:55.735 02:48:26 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:55.735 02:48:26 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:55.735 02:48:26 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:55.735 02:48:26 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:55.736 02:48:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:55.736 02:48:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:55.736 02:48:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:55.736 02:48:26 -- paths/export.sh@5 -- # export PATH 00:03:55.736 02:48:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:55.736 02:48:26 -- nvmf/common.sh@51 -- # : 0 00:03:55.736 02:48:26 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:55.736 02:48:26 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:55.736 02:48:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:55.736 02:48:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:55.736 02:48:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:55.736 02:48:26 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:55.736 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:55.736 02:48:26 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:55.736 02:48:26 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:55.736 02:48:26 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:55.736 02:48:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:55.736 02:48:26 -- spdk/autotest.sh@32 -- # uname -s 00:03:55.736 02:48:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:55.736 02:48:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:55.736 02:48:26 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:55.736 02:48:26 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:55.736 02:48:26 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:55.736 02:48:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:55.736 02:48:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:55.736 02:48:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:55.736 02:48:26 -- spdk/autotest.sh@48 -- # udevadm_pid=55039 00:03:55.736 02:48:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:55.736 02:48:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:55.736 02:48:26 -- pm/common@17 -- # local monitor 00:03:55.736 02:48:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.736 02:48:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.736 02:48:26 -- pm/common@21 -- # date +%s 00:03:55.736 02:48:26 -- pm/common@25 -- # sleep 1 00:03:55.736 02:48:26 -- pm/common@21 -- # date +%s 00:03:55.736 02:48:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733366906 00:03:55.736 02:48:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733366906 00:03:55.995 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733366906_collect-cpu-load.pm.log 00:03:55.995 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733366906_collect-vmstat.pm.log 00:03:56.932 02:48:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:56.932 02:48:27 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:56.932 02:48:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.932 02:48:27 -- common/autotest_common.sh@10 -- # set +x 00:03:56.932 02:48:27 -- spdk/autotest.sh@59 -- # create_test_list 
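
The two echo calls traced a few entries above are autotest.sh rerouting kernel core dumps into the shared output directory before any test runs, while collect-cpu-load and collect-vmstat are the resource monitors whose PID files the make teardown earlier terminated with SIGTERM. A minimal sketch of the core-dump part, assuming the first echo is redirected into /proc/sys/kernel/core_pattern (the redirect target is not shown in the trace) and using illustrative $rootdir/$output_dir variables:

    # remember the distro handler (systemd-coredump here) so it can be restored later
    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)
    mkdir -p "$output_dir/coredumps"
    # a leading '|' makes the kernel pipe every crash into the collector script,
    # passing the crashing PID (%P), the signal number (%s) and the dump time (%t)
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
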
00:03:56.932 02:48:27 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:56.932 02:48:27 -- common/autotest_common.sh@10 -- # set +x 00:03:56.932 02:48:27 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:56.932 02:48:27 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:56.932 02:48:27 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:56.932 02:48:27 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:56.932 02:48:27 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:56.932 02:48:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:56.932 02:48:27 -- common/autotest_common.sh@1457 -- # uname 00:03:56.932 02:48:27 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:56.932 02:48:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:56.932 02:48:27 -- common/autotest_common.sh@1477 -- # uname 00:03:56.932 02:48:27 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:56.932 02:48:27 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:56.932 02:48:27 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:56.932 lcov: LCOV version 1.15 00:03:56.932 02:48:27 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:11.819 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:11.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:26.713 02:48:57 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:26.713 02:48:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.713 02:48:57 -- common/autotest_common.sh@10 -- # set +x 00:04:26.713 02:48:57 -- spdk/autotest.sh@78 -- # rm -f 00:04:26.713 02:48:57 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:27.282 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.282 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:27.282 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:27.282 02:48:57 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:27.282 02:48:57 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:27.282 02:48:57 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:27.282 02:48:57 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:27.282 02:48:57 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:27.282 02:48:57 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:27.282 02:48:57 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:27.282 02:48:57 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:27.282 02:48:57 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:27.282 02:48:57 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:27.282 02:48:57 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:27.282 02:48:57 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:27.282 02:48:57 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:27.282 02:48:57 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:27.282 02:48:57 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:27.282 02:48:57 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:27.282 02:48:57 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:27.282 02:48:57 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:27.282 02:48:57 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:27.282 02:48:57 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:27.282 02:48:57 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:27.282 02:48:57 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:27.282 02:48:57 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:27.282 02:48:57 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:27.282 02:48:57 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:27.282 02:48:57 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:27.282 02:48:57 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:04:27.282 02:48:57 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:27.282 02:48:57 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:27.282 02:48:57 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:27.282 02:48:57 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:27.282 02:48:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:27.282 02:48:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:27.282 02:48:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:27.283 02:48:57 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:27.283 02:48:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:27.283 No valid GPT data, bailing 00:04:27.283 02:48:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:27.283 02:48:57 -- scripts/common.sh@394 -- # pt= 00:04:27.283 02:48:57 -- scripts/common.sh@395 -- # return 1 00:04:27.283 02:48:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:27.283 1+0 records in 00:04:27.283 1+0 records out 00:04:27.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00341798 s, 307 MB/s 00:04:27.283 02:48:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:27.283 02:48:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:27.283 02:48:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:27.283 02:48:58 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:27.283 02:48:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:27.283 No valid GPT data, bailing 00:04:27.283 02:48:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:27.283 02:48:58 -- scripts/common.sh@394 -- # pt= 00:04:27.283 02:48:58 -- scripts/common.sh@395 -- # return 1 00:04:27.283 02:48:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:27.283 1+0 records in 00:04:27.283 1+0 records out 00:04:27.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00438018 s, 239 MB/s 00:04:27.283 02:48:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 
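
The "No valid GPT data, bailing" / dd pairs above are the pre-test namespace cleanup: a namespace is only wiped when neither spdk-gpt.py nor blkid finds a partition table on it, and the wipe covers just the first MiB so stale filesystem or GPT metadata cannot confuse later tests. A compact sketch of that loop, using only the blkid probe (the traced block_in_use helper also calls spdk-gpt.py) and covering the remaining namespaces the same way; not the verbatim autotest.sh code:

    shopt -s extglob                   # needed for the /dev/nvme*n!(*p*) pattern
    for dev in /dev/nvme*n!(*p*); do   # whole namespaces, skip partitions
        # keep the device untouched if blkid still reports a partition table
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        [[ -n $pt ]] && continue
        # otherwise zero the first MiB, matching the dd calls in the log
        dd if=/dev/zero of="$dev" bs=1M count=1
    done
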
00:04:27.283 02:48:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:27.283 02:48:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:27.283 02:48:58 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:27.283 02:48:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:27.541 No valid GPT data, bailing 00:04:27.542 02:48:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:27.542 02:48:58 -- scripts/common.sh@394 -- # pt= 00:04:27.542 02:48:58 -- scripts/common.sh@395 -- # return 1 00:04:27.542 02:48:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:27.542 1+0 records in 00:04:27.542 1+0 records out 00:04:27.542 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431527 s, 243 MB/s 00:04:27.542 02:48:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:27.542 02:48:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:27.542 02:48:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:27.542 02:48:58 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:27.542 02:48:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:27.542 No valid GPT data, bailing 00:04:27.542 02:48:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:27.542 02:48:58 -- scripts/common.sh@394 -- # pt= 00:04:27.542 02:48:58 -- scripts/common.sh@395 -- # return 1 00:04:27.542 02:48:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:27.542 1+0 records in 00:04:27.542 1+0 records out 00:04:27.542 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00381084 s, 275 MB/s 00:04:27.542 02:48:58 -- spdk/autotest.sh@105 -- # sync 00:04:27.542 02:48:58 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:27.542 02:48:58 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:27.542 02:48:58 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:29.445 02:49:00 -- spdk/autotest.sh@111 -- # uname -s 00:04:29.445 02:49:00 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:29.445 02:49:00 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:29.445 02:49:00 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:30.381 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.381 Hugepages 00:04:30.381 node hugesize free / total 00:04:30.381 node0 1048576kB 0 / 0 00:04:30.381 node0 2048kB 0 / 0 00:04:30.381 00:04:30.381 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:30.381 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:30.381 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:30.381 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:30.381 02:49:01 -- spdk/autotest.sh@117 -- # uname -s 00:04:30.381 02:49:01 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:30.381 02:49:01 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:30.381 02:49:01 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:30.949 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:31.208 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:31.209 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:31.209 02:49:01 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:32.586 02:49:02 -- 
common/autotest_common.sh@1518 -- # bdfs=() 00:04:32.586 02:49:02 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:32.586 02:49:02 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:32.587 02:49:02 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:32.587 02:49:02 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:32.587 02:49:02 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:32.587 02:49:02 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.587 02:49:02 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:32.587 02:49:02 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:32.587 02:49:03 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:32.587 02:49:03 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:32.587 02:49:03 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:32.587 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.587 Waiting for block devices as requested 00:04:32.846 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:32.846 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:32.846 02:49:03 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:32.846 02:49:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:32.846 02:49:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:32.846 02:49:03 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:32.846 02:49:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:32.846 02:49:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:32.846 02:49:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:32.846 02:49:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:32.846 02:49:03 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:32.846 02:49:03 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:32.846 02:49:03 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:32.846 02:49:03 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:32.846 02:49:03 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:32.846 02:49:03 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:32.846 02:49:03 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:32.846 02:49:03 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:32.846 02:49:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:32.846 02:49:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:32.846 02:49:03 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:32.846 02:49:03 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:32.846 02:49:03 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:32.846 02:49:03 -- common/autotest_common.sh@1543 -- # continue 00:04:32.846 02:49:03 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:32.846 02:49:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:32.846 02:49:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 
00:04:32.846 02:49:03 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:32.846 02:49:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:32.846 02:49:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:32.846 02:49:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:32.846 02:49:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:32.846 02:49:03 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:32.846 02:49:03 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:32.846 02:49:03 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:32.846 02:49:03 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:32.846 02:49:03 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:32.846 02:49:03 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:32.846 02:49:03 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:32.846 02:49:03 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:32.846 02:49:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:32.846 02:49:03 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:32.846 02:49:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:33.105 02:49:03 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:33.105 02:49:03 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:33.105 02:49:03 -- common/autotest_common.sh@1543 -- # continue 00:04:33.105 02:49:03 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:33.105 02:49:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:33.105 02:49:03 -- common/autotest_common.sh@10 -- # set +x 00:04:33.105 02:49:03 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:33.105 02:49:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:33.105 02:49:03 -- common/autotest_common.sh@10 -- # set +x 00:04:33.105 02:49:03 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:33.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:33.674 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:33.934 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:33.934 02:49:04 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:33.934 02:49:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:33.934 02:49:04 -- common/autotest_common.sh@10 -- # set +x 00:04:33.934 02:49:04 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:33.934 02:49:04 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:33.934 02:49:04 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:33.934 02:49:04 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:33.934 02:49:04 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:33.934 02:49:04 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:33.934 02:49:04 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:33.934 02:49:04 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:33.934 02:49:04 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:33.934 02:49:04 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:33.934 02:49:04 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:33.934 02:49:04 -- common/autotest_common.sh@1499 -- # 
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:33.934 02:49:04 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:33.934 02:49:04 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:33.934 02:49:04 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:33.934 02:49:04 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:33.934 02:49:04 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:33.934 02:49:04 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:33.934 02:49:04 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:33.934 02:49:04 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:33.934 02:49:04 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:33.934 02:49:04 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:33.934 02:49:04 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:33.934 02:49:04 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:33.934 02:49:04 -- common/autotest_common.sh@1572 -- # return 0 00:04:33.934 02:49:04 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:33.934 02:49:04 -- common/autotest_common.sh@1580 -- # return 0 00:04:33.934 02:49:04 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:33.934 02:49:04 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:33.934 02:49:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:33.934 02:49:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:33.934 02:49:04 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:33.934 02:49:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:33.934 02:49:04 -- common/autotest_common.sh@10 -- # set +x 00:04:33.934 02:49:04 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:33.934 02:49:04 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:33.934 02:49:04 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:33.934 02:49:04 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:33.934 02:49:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.934 02:49:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.934 02:49:04 -- common/autotest_common.sh@10 -- # set +x 00:04:33.934 ************************************ 00:04:33.934 START TEST env 00:04:33.934 ************************************ 00:04:33.934 02:49:04 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:34.195 * Looking for test storage... 
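
The opal_revert_cleanup block that just finished enumerates the NVMe controllers with gen_nvme.sh | jq and only selects drives whose PCI device ID is 0x0a54 for the OPAL revert; both emulated controllers in this run report 0x0010, so the bdfs array stays empty and the function returns without touching anything. A sketch of that selection step, with $rootdir standing in for the SPDK checkout path:

    # list NVMe BDFs the same way the trace does
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        # only 0x0a54 devices are candidates for the OPAL revert;
        # the QEMU controllers here show 0x0010 and are skipped
        [[ $device == 0x0a54 ]] && echo "$bdf"
    done
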
00:04:34.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:34.195 02:49:04 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:34.195 02:49:04 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:34.195 02:49:04 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:34.195 02:49:04 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:34.195 02:49:04 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.195 02:49:04 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.195 02:49:04 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.195 02:49:04 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.195 02:49:04 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.195 02:49:04 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.195 02:49:04 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.195 02:49:04 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.195 02:49:04 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.195 02:49:04 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.195 02:49:04 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.195 02:49:04 env -- scripts/common.sh@344 -- # case "$op" in 00:04:34.195 02:49:04 env -- scripts/common.sh@345 -- # : 1 00:04:34.195 02:49:04 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.195 02:49:04 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:34.195 02:49:04 env -- scripts/common.sh@365 -- # decimal 1 00:04:34.195 02:49:04 env -- scripts/common.sh@353 -- # local d=1 00:04:34.195 02:49:04 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.195 02:49:04 env -- scripts/common.sh@355 -- # echo 1 00:04:34.195 02:49:04 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.195 02:49:04 env -- scripts/common.sh@366 -- # decimal 2 00:04:34.195 02:49:04 env -- scripts/common.sh@353 -- # local d=2 00:04:34.195 02:49:04 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.195 02:49:04 env -- scripts/common.sh@355 -- # echo 2 00:04:34.195 02:49:04 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.195 02:49:04 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.195 02:49:04 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.195 02:49:04 env -- scripts/common.sh@368 -- # return 0 00:04:34.195 02:49:04 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.195 02:49:04 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:34.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.195 --rc genhtml_branch_coverage=1 00:04:34.195 --rc genhtml_function_coverage=1 00:04:34.195 --rc genhtml_legend=1 00:04:34.195 --rc geninfo_all_blocks=1 00:04:34.195 --rc geninfo_unexecuted_blocks=1 00:04:34.195 00:04:34.195 ' 00:04:34.195 02:49:04 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:34.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.195 --rc genhtml_branch_coverage=1 00:04:34.195 --rc genhtml_function_coverage=1 00:04:34.195 --rc genhtml_legend=1 00:04:34.195 --rc geninfo_all_blocks=1 00:04:34.195 --rc geninfo_unexecuted_blocks=1 00:04:34.195 00:04:34.195 ' 00:04:34.195 02:49:04 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:34.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.195 --rc genhtml_branch_coverage=1 00:04:34.195 --rc genhtml_function_coverage=1 00:04:34.195 --rc 
genhtml_legend=1 00:04:34.195 --rc geninfo_all_blocks=1 00:04:34.195 --rc geninfo_unexecuted_blocks=1 00:04:34.195 00:04:34.195 ' 00:04:34.195 02:49:04 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:34.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.195 --rc genhtml_branch_coverage=1 00:04:34.195 --rc genhtml_function_coverage=1 00:04:34.195 --rc genhtml_legend=1 00:04:34.195 --rc geninfo_all_blocks=1 00:04:34.195 --rc geninfo_unexecuted_blocks=1 00:04:34.195 00:04:34.195 ' 00:04:34.195 02:49:04 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:34.195 02:49:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.195 02:49:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.195 02:49:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.195 ************************************ 00:04:34.195 START TEST env_memory 00:04:34.195 ************************************ 00:04:34.195 02:49:04 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:34.195 00:04:34.195 00:04:34.195 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.195 http://cunit.sourceforge.net/ 00:04:34.195 00:04:34.195 00:04:34.195 Suite: memory 00:04:34.195 Test: alloc and free memory map ...[2024-12-05 02:49:04.976463] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:34.195 passed 00:04:34.195 Test: mem map translation ...[2024-12-05 02:49:05.036997] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:34.195 [2024-12-05 02:49:05.037073] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:34.195 [2024-12-05 02:49:05.037181] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:34.195 [2024-12-05 02:49:05.037218] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:34.453 passed 00:04:34.453 Test: mem map registration ...[2024-12-05 02:49:05.136570] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:34.454 [2024-12-05 02:49:05.136656] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:34.454 passed 00:04:34.454 Test: mem map adjacent registrations ...passed 00:04:34.454 00:04:34.454 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.454 suites 1 1 n/a 0 0 00:04:34.454 tests 4 4 4 0 0 00:04:34.454 asserts 152 152 152 0 n/a 00:04:34.454 00:04:34.454 Elapsed time = 0.340 seconds 00:04:34.454 00:04:34.454 real 0m0.379s 00:04:34.454 user 0m0.341s 00:04:34.454 sys 0m0.030s 00:04:34.454 02:49:05 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.454 02:49:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:34.454 ************************************ 00:04:34.454 END TEST env_memory 00:04:34.454 ************************************ 00:04:34.712 02:49:05 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:34.712 02:49:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.712 02:49:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.712 02:49:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.712 ************************************ 00:04:34.712 START TEST env_vtophys 00:04:34.712 ************************************ 00:04:34.712 02:49:05 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:34.712 EAL: lib.eal log level changed from notice to debug 00:04:34.712 EAL: Detected lcore 0 as core 0 on socket 0 00:04:34.712 EAL: Detected lcore 1 as core 0 on socket 0 00:04:34.712 EAL: Detected lcore 2 as core 0 on socket 0 00:04:34.712 EAL: Detected lcore 3 as core 0 on socket 0 00:04:34.712 EAL: Detected lcore 4 as core 0 on socket 0 00:04:34.712 EAL: Detected lcore 5 as core 0 on socket 0 00:04:34.712 EAL: Detected lcore 6 as core 0 on socket 0 00:04:34.712 EAL: Detected lcore 7 as core 0 on socket 0 00:04:34.712 EAL: Detected lcore 8 as core 0 on socket 0 00:04:34.712 EAL: Detected lcore 9 as core 0 on socket 0 00:04:34.712 EAL: Maximum logical cores by configuration: 128 00:04:34.712 EAL: Detected CPU lcores: 10 00:04:34.712 EAL: Detected NUMA nodes: 1 00:04:34.712 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:34.712 EAL: Detected shared linkage of DPDK 00:04:34.712 EAL: No shared files mode enabled, IPC will be disabled 00:04:34.712 EAL: Selected IOVA mode 'PA' 00:04:34.712 EAL: Probing VFIO support... 00:04:34.712 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:34.712 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:34.712 EAL: Ask a virtual area of 0x2e000 bytes 00:04:34.712 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:34.712 EAL: Setting up physically contiguous memory... 
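
EAL's probe just above fell back because the vfio kernel module is not present on this VM, consistent with setup.sh earlier binding the NVMe controllers to uio_pci_generic. On a host where VFIO is wanted instead, loading the module before setup.sh is typically enough (illustrative, not something this run does, and it also requires an IOMMU exposed to the guest):

    # vfio-pci pulls in vfio and vfio_iommu_type1 as dependencies
    lsmod | grep -q '^vfio_pci' || sudo modprobe vfio-pci

The memseg list layout that EAL prints for the hugepage-backed heap continues below.
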
00:04:34.712 EAL: Setting maximum number of open files to 524288 00:04:34.712 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:34.712 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:34.712 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.712 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:34.712 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.712 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.712 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:34.712 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:34.712 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.712 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:34.712 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.712 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.712 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:34.712 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:34.712 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.712 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:34.712 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.712 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.712 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:34.712 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:34.712 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.712 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:34.712 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.712 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.712 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:34.712 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:34.712 EAL: Hugepages will be freed exactly as allocated. 00:04:34.712 EAL: No shared files mode enabled, IPC is disabled 00:04:34.712 EAL: No shared files mode enabled, IPC is disabled 00:04:34.712 EAL: TSC frequency is ~2200000 KHz 00:04:34.712 EAL: Main lcore 0 is ready (tid=7f21c06b6a40;cpuset=[0]) 00:04:34.712 EAL: Trying to obtain current memory policy. 00:04:34.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.712 EAL: Restoring previous memory policy: 0 00:04:34.712 EAL: request: mp_malloc_sync 00:04:34.712 EAL: No shared files mode enabled, IPC is disabled 00:04:34.712 EAL: Heap on socket 0 was expanded by 2MB 00:04:34.712 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:34.712 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:34.712 EAL: Mem event callback 'spdk:(nil)' registered 00:04:34.712 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:34.970 00:04:34.970 00:04:34.970 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.970 http://cunit.sourceforge.net/ 00:04:34.970 00:04:34.970 00:04:34.970 Suite: components_suite 00:04:35.228 Test: vtophys_malloc_test ...passed 00:04:35.228 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:35.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.228 EAL: Restoring previous memory policy: 4 00:04:35.228 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.228 EAL: request: mp_malloc_sync 00:04:35.228 EAL: No shared files mode enabled, IPC is disabled 00:04:35.228 EAL: Heap on socket 0 was expanded by 4MB 00:04:35.228 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.228 EAL: request: mp_malloc_sync 00:04:35.228 EAL: No shared files mode enabled, IPC is disabled 00:04:35.228 EAL: Heap on socket 0 was shrunk by 4MB 00:04:35.228 EAL: Trying to obtain current memory policy. 00:04:35.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.228 EAL: Restoring previous memory policy: 4 00:04:35.228 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.228 EAL: request: mp_malloc_sync 00:04:35.228 EAL: No shared files mode enabled, IPC is disabled 00:04:35.228 EAL: Heap on socket 0 was expanded by 6MB 00:04:35.229 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.229 EAL: request: mp_malloc_sync 00:04:35.229 EAL: No shared files mode enabled, IPC is disabled 00:04:35.229 EAL: Heap on socket 0 was shrunk by 6MB 00:04:35.229 EAL: Trying to obtain current memory policy. 00:04:35.229 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.229 EAL: Restoring previous memory policy: 4 00:04:35.229 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.229 EAL: request: mp_malloc_sync 00:04:35.229 EAL: No shared files mode enabled, IPC is disabled 00:04:35.229 EAL: Heap on socket 0 was expanded by 10MB 00:04:35.229 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.229 EAL: request: mp_malloc_sync 00:04:35.229 EAL: No shared files mode enabled, IPC is disabled 00:04:35.229 EAL: Heap on socket 0 was shrunk by 10MB 00:04:35.229 EAL: Trying to obtain current memory policy. 00:04:35.229 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.229 EAL: Restoring previous memory policy: 4 00:04:35.229 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.229 EAL: request: mp_malloc_sync 00:04:35.229 EAL: No shared files mode enabled, IPC is disabled 00:04:35.229 EAL: Heap on socket 0 was expanded by 18MB 00:04:35.229 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.229 EAL: request: mp_malloc_sync 00:04:35.229 EAL: No shared files mode enabled, IPC is disabled 00:04:35.229 EAL: Heap on socket 0 was shrunk by 18MB 00:04:35.229 EAL: Trying to obtain current memory policy. 00:04:35.229 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.229 EAL: Restoring previous memory policy: 4 00:04:35.229 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.229 EAL: request: mp_malloc_sync 00:04:35.229 EAL: No shared files mode enabled, IPC is disabled 00:04:35.229 EAL: Heap on socket 0 was expanded by 34MB 00:04:35.487 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.487 EAL: request: mp_malloc_sync 00:04:35.487 EAL: No shared files mode enabled, IPC is disabled 00:04:35.487 EAL: Heap on socket 0 was shrunk by 34MB 00:04:35.487 EAL: Trying to obtain current memory policy. 
00:04:35.487 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.487 EAL: Restoring previous memory policy: 4 00:04:35.487 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.487 EAL: request: mp_malloc_sync 00:04:35.487 EAL: No shared files mode enabled, IPC is disabled 00:04:35.487 EAL: Heap on socket 0 was expanded by 66MB 00:04:35.487 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.487 EAL: request: mp_malloc_sync 00:04:35.487 EAL: No shared files mode enabled, IPC is disabled 00:04:35.487 EAL: Heap on socket 0 was shrunk by 66MB 00:04:35.487 EAL: Trying to obtain current memory policy. 00:04:35.487 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.487 EAL: Restoring previous memory policy: 4 00:04:35.487 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.487 EAL: request: mp_malloc_sync 00:04:35.487 EAL: No shared files mode enabled, IPC is disabled 00:04:35.487 EAL: Heap on socket 0 was expanded by 130MB 00:04:35.745 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.745 EAL: request: mp_malloc_sync 00:04:35.745 EAL: No shared files mode enabled, IPC is disabled 00:04:35.745 EAL: Heap on socket 0 was shrunk by 130MB 00:04:36.004 EAL: Trying to obtain current memory policy. 00:04:36.004 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.004 EAL: Restoring previous memory policy: 4 00:04:36.004 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.004 EAL: request: mp_malloc_sync 00:04:36.004 EAL: No shared files mode enabled, IPC is disabled 00:04:36.004 EAL: Heap on socket 0 was expanded by 258MB 00:04:36.262 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.262 EAL: request: mp_malloc_sync 00:04:36.262 EAL: No shared files mode enabled, IPC is disabled 00:04:36.262 EAL: Heap on socket 0 was shrunk by 258MB 00:04:36.521 EAL: Trying to obtain current memory policy. 00:04:36.521 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.781 EAL: Restoring previous memory policy: 4 00:04:36.781 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.781 EAL: request: mp_malloc_sync 00:04:36.781 EAL: No shared files mode enabled, IPC is disabled 00:04:36.781 EAL: Heap on socket 0 was expanded by 514MB 00:04:37.350 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.350 EAL: request: mp_malloc_sync 00:04:37.350 EAL: No shared files mode enabled, IPC is disabled 00:04:37.350 EAL: Heap on socket 0 was shrunk by 514MB 00:04:37.917 EAL: Trying to obtain current memory policy. 
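
The expand/shrink pairs printed by vtophys_spdk_malloc_test step through 4, 6, 10, 18, 34, 66, 130, 258 and 514 MB, i.e. 2^n + 2 MB for n = 1..9, consistent with an allocation that doubles each round on top of the 2 MB the heap was given at EAL start-up; the final 2^10 + 2 = 1026 MB round follows below before the suite reports its totals.
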
00:04:37.917 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.176 EAL: Restoring previous memory policy: 4 00:04:38.176 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.176 EAL: request: mp_malloc_sync 00:04:38.176 EAL: No shared files mode enabled, IPC is disabled 00:04:38.176 EAL: Heap on socket 0 was expanded by 1026MB 00:04:39.555 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.555 EAL: request: mp_malloc_sync 00:04:39.555 EAL: No shared files mode enabled, IPC is disabled 00:04:39.555 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:40.930 passed 00:04:40.930 00:04:40.930 Run Summary: Type Total Ran Passed Failed Inactive 00:04:40.930 suites 1 1 n/a 0 0 00:04:40.930 tests 2 2 2 0 0 00:04:40.930 asserts 5838 5838 5838 0 n/a 00:04:40.930 00:04:40.930 Elapsed time = 5.929 seconds 00:04:40.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.930 EAL: request: mp_malloc_sync 00:04:40.930 EAL: No shared files mode enabled, IPC is disabled 00:04:40.930 EAL: Heap on socket 0 was shrunk by 2MB 00:04:40.930 EAL: No shared files mode enabled, IPC is disabled 00:04:40.930 EAL: No shared files mode enabled, IPC is disabled 00:04:40.930 EAL: No shared files mode enabled, IPC is disabled 00:04:40.930 00:04:40.930 real 0m6.261s 00:04:40.930 user 0m5.423s 00:04:40.930 sys 0m0.682s 00:04:40.930 02:49:11 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.930 02:49:11 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:40.930 ************************************ 00:04:40.930 END TEST env_vtophys 00:04:40.930 ************************************ 00:04:40.930 02:49:11 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:40.930 02:49:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.930 02:49:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.930 02:49:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:40.930 ************************************ 00:04:40.930 START TEST env_pci 00:04:40.930 ************************************ 00:04:40.930 02:49:11 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:40.930 00:04:40.930 00:04:40.930 CUnit - A unit testing framework for C - Version 2.1-3 00:04:40.930 http://cunit.sourceforge.net/ 00:04:40.930 00:04:40.930 00:04:40.930 Suite: pci 00:04:40.930 Test: pci_hook ...[2024-12-05 02:49:11.686488] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57311 has claimed it 00:04:40.930 EAL: Cannot find device (10000:00:01.0) 00:04:40.930 EAL: Failed to attach device on primary process 00:04:40.930 passed 00:04:40.930 00:04:40.930 Run Summary: Type Total Ran Passed Failed Inactive 00:04:40.930 suites 1 1 n/a 0 0 00:04:40.930 tests 1 1 1 0 0 00:04:40.930 asserts 25 25 25 0 n/a 00:04:40.930 00:04:40.930 Elapsed time = 0.007 seconds 00:04:40.930 00:04:40.930 real 0m0.080s 00:04:40.930 user 0m0.040s 00:04:40.930 sys 0m0.039s 00:04:40.930 02:49:11 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.930 02:49:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:40.930 ************************************ 00:04:40.930 END TEST env_pci 00:04:40.930 ************************************ 00:04:41.190 02:49:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:41.190 02:49:11 env -- env/env.sh@15 -- # uname 00:04:41.190 02:49:11 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:41.190 02:49:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:41.190 02:49:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:41.190 02:49:11 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:41.190 02:49:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.190 02:49:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.190 ************************************ 00:04:41.190 START TEST env_dpdk_post_init 00:04:41.190 ************************************ 00:04:41.190 02:49:11 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:41.190 EAL: Detected CPU lcores: 10 00:04:41.190 EAL: Detected NUMA nodes: 1 00:04:41.190 EAL: Detected shared linkage of DPDK 00:04:41.190 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:41.190 EAL: Selected IOVA mode 'PA' 00:04:41.190 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:41.449 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:41.450 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:41.450 Starting DPDK initialization... 00:04:41.450 Starting SPDK post initialization... 00:04:41.450 SPDK NVMe probe 00:04:41.450 Attaching to 0000:00:10.0 00:04:41.450 Attaching to 0000:00:11.0 00:04:41.450 Attached to 0000:00:10.0 00:04:41.450 Attached to 0000:00:11.0 00:04:41.450 Cleaning up... 00:04:41.450 00:04:41.450 real 0m0.300s 00:04:41.450 user 0m0.103s 00:04:41.450 sys 0m0.096s 00:04:41.450 02:49:12 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.450 02:49:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.450 ************************************ 00:04:41.450 END TEST env_dpdk_post_init 00:04:41.450 ************************************ 00:04:41.450 02:49:12 env -- env/env.sh@26 -- # uname 00:04:41.450 02:49:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:41.450 02:49:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.450 02:49:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.450 02:49:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.450 02:49:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.450 ************************************ 00:04:41.450 START TEST env_mem_callbacks 00:04:41.450 ************************************ 00:04:41.450 02:49:12 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.450 EAL: Detected CPU lcores: 10 00:04:41.450 EAL: Detected NUMA nodes: 1 00:04:41.450 EAL: Detected shared linkage of DPDK 00:04:41.450 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:41.450 EAL: Selected IOVA mode 'PA' 00:04:41.710 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:41.710 00:04:41.710 00:04:41.710 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.710 http://cunit.sourceforge.net/ 00:04:41.710 00:04:41.710 00:04:41.710 Suite: memory 00:04:41.710 Test: test ... 
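A note on the register/unregister lines that follow (an interpretation of the output, not part of it): the test's memory hooks fire only in whole 2 MiB hugepage units, so each registered length is the malloc size rounded up to hugepage granularity, in some cases with one extra page that presumably covers allocator metadata or alignment:

  malloc 3145728 (3 MiB)  -> register 4194304  = 2 x 2 MiB pages
  malloc 4194304 (4 MiB)  -> register 6291456  = 3 x 2 MiB pages (one extra page)
  malloc 8388608 (8 MiB)  -> register 10485760 = 5 x 2 MiB pages

The 64-byte malloc below produces no register line at all, since it is carved out of memory that is already registered.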
00:04:41.710 register 0x200000200000 2097152 00:04:41.710 malloc 3145728 00:04:41.710 register 0x200000400000 4194304 00:04:41.710 buf 0x2000004fffc0 len 3145728 PASSED 00:04:41.710 malloc 64 00:04:41.710 buf 0x2000004ffec0 len 64 PASSED 00:04:41.710 malloc 4194304 00:04:41.710 register 0x200000800000 6291456 00:04:41.710 buf 0x2000009fffc0 len 4194304 PASSED 00:04:41.710 free 0x2000004fffc0 3145728 00:04:41.710 free 0x2000004ffec0 64 00:04:41.710 unregister 0x200000400000 4194304 PASSED 00:04:41.710 free 0x2000009fffc0 4194304 00:04:41.710 unregister 0x200000800000 6291456 PASSED 00:04:41.710 malloc 8388608 00:04:41.710 register 0x200000400000 10485760 00:04:41.710 buf 0x2000005fffc0 len 8388608 PASSED 00:04:41.710 free 0x2000005fffc0 8388608 00:04:41.710 unregister 0x200000400000 10485760 PASSED 00:04:41.710 passed 00:04:41.710 00:04:41.710 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.710 suites 1 1 n/a 0 0 00:04:41.710 tests 1 1 1 0 0 00:04:41.710 asserts 15 15 15 0 n/a 00:04:41.710 00:04:41.710 Elapsed time = 0.061 seconds 00:04:41.710 00:04:41.710 real 0m0.266s 00:04:41.710 user 0m0.102s 00:04:41.710 sys 0m0.062s 00:04:41.710 02:49:12 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.710 02:49:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:41.710 ************************************ 00:04:41.710 END TEST env_mem_callbacks 00:04:41.710 ************************************ 00:04:41.710 00:04:41.710 real 0m7.752s 00:04:41.710 user 0m6.200s 00:04:41.710 sys 0m1.152s 00:04:41.710 02:49:12 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.710 02:49:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.710 ************************************ 00:04:41.710 END TEST env 00:04:41.710 ************************************ 00:04:41.710 02:49:12 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:41.710 02:49:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.710 02:49:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.710 02:49:12 -- common/autotest_common.sh@10 -- # set +x 00:04:41.710 ************************************ 00:04:41.710 START TEST rpc 00:04:41.710 ************************************ 00:04:41.710 02:49:12 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:41.970 * Looking for test storage... 
00:04:41.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:41.970 02:49:12 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.970 02:49:12 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.970 02:49:12 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:41.970 02:49:12 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.970 02:49:12 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.970 02:49:12 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.970 02:49:12 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.970 02:49:12 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.970 02:49:12 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.970 02:49:12 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.970 02:49:12 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.970 02:49:12 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.970 02:49:12 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.970 02:49:12 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.970 02:49:12 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.970 02:49:12 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:41.970 02:49:12 rpc -- scripts/common.sh@345 -- # : 1 00:04:41.970 02:49:12 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.970 02:49:12 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.970 02:49:12 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:41.970 02:49:12 rpc -- scripts/common.sh@353 -- # local d=1 00:04:41.970 02:49:12 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.970 02:49:12 rpc -- scripts/common.sh@355 -- # echo 1 00:04:41.970 02:49:12 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.970 02:49:12 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:41.970 02:49:12 rpc -- scripts/common.sh@353 -- # local d=2 00:04:41.970 02:49:12 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.970 02:49:12 rpc -- scripts/common.sh@355 -- # echo 2 00:04:41.970 02:49:12 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.970 02:49:12 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.970 02:49:12 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.970 02:49:12 rpc -- scripts/common.sh@368 -- # return 0 00:04:41.970 02:49:12 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.970 02:49:12 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:41.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.970 --rc genhtml_branch_coverage=1 00:04:41.970 --rc genhtml_function_coverage=1 00:04:41.970 --rc genhtml_legend=1 00:04:41.970 --rc geninfo_all_blocks=1 00:04:41.970 --rc geninfo_unexecuted_blocks=1 00:04:41.970 00:04:41.970 ' 00:04:41.970 02:49:12 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:41.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.970 --rc genhtml_branch_coverage=1 00:04:41.970 --rc genhtml_function_coverage=1 00:04:41.970 --rc genhtml_legend=1 00:04:41.970 --rc geninfo_all_blocks=1 00:04:41.970 --rc geninfo_unexecuted_blocks=1 00:04:41.970 00:04:41.970 ' 00:04:41.970 02:49:12 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:41.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.971 --rc genhtml_branch_coverage=1 00:04:41.971 --rc genhtml_function_coverage=1 00:04:41.971 --rc 
genhtml_legend=1 00:04:41.971 --rc geninfo_all_blocks=1 00:04:41.971 --rc geninfo_unexecuted_blocks=1 00:04:41.971 00:04:41.971 ' 00:04:41.971 02:49:12 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:41.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.971 --rc genhtml_branch_coverage=1 00:04:41.971 --rc genhtml_function_coverage=1 00:04:41.971 --rc genhtml_legend=1 00:04:41.971 --rc geninfo_all_blocks=1 00:04:41.971 --rc geninfo_unexecuted_blocks=1 00:04:41.971 00:04:41.971 ' 00:04:41.971 02:49:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57438 00:04:41.971 02:49:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.971 02:49:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57438 00:04:41.971 02:49:12 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:41.971 02:49:12 rpc -- common/autotest_common.sh@835 -- # '[' -z 57438 ']' 00:04:41.971 02:49:12 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.971 02:49:12 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.971 02:49:12 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.971 02:49:12 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.971 02:49:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.230 [2024-12-05 02:49:12.848910] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:04:42.230 [2024-12-05 02:49:12.849126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57438 ] 00:04:42.230 [2024-12-05 02:49:13.043141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.488 [2024-12-05 02:49:13.176255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:42.488 [2024-12-05 02:49:13.176355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57438' to capture a snapshot of events at runtime. 00:04:42.488 [2024-12-05 02:49:13.176378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:42.489 [2024-12-05 02:49:13.176396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:42.489 [2024-12-05 02:49:13.176410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57438 for offline analysis/debug. 
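The spdk_tgt above was launched with "-e bdev", and the two NOTICE lines spell out how to inspect that tracepoint group while pid 57438 is still running. A minimal sketch of acting on them, assuming the spdk_trace binary sits in build/bin of the same checkout used by this run:

  # Decode a live snapshot of the 'bdev' tracepoint group from the running target,
  # exactly as the NOTICE above suggests.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 57438

  # Or keep the raw shared-memory trace file for offline decoding after the target exits.
  cp /dev/shm/spdk_tgt_trace.pid57438 /tmp/spdk_tgt_trace.pid57438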
00:04:42.489 [2024-12-05 02:49:13.177910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.748 [2024-12-05 02:49:13.408412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:43.315 02:49:13 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.316 02:49:13 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:43.316 02:49:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:43.316 02:49:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:43.316 02:49:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:43.316 02:49:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:43.316 02:49:13 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.316 02:49:13 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.316 02:49:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.316 ************************************ 00:04:43.316 START TEST rpc_integrity 00:04:43.316 ************************************ 00:04:43.316 02:49:13 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:43.316 02:49:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.316 02:49:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.316 02:49:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.316 02:49:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.316 02:49:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.316 02:49:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:43.316 02:49:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:43.316 02:49:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.316 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.316 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.316 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.316 02:49:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:43.316 02:49:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.316 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.316 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.316 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.316 02:49:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.316 { 00:04:43.316 "name": "Malloc0", 00:04:43.316 "aliases": [ 00:04:43.316 "5ee6554c-3b85-441c-bce5-ac2faf91b59a" 00:04:43.316 ], 00:04:43.316 "product_name": "Malloc disk", 00:04:43.316 "block_size": 512, 00:04:43.316 "num_blocks": 16384, 00:04:43.316 "uuid": "5ee6554c-3b85-441c-bce5-ac2faf91b59a", 00:04:43.316 "assigned_rate_limits": { 00:04:43.316 "rw_ios_per_sec": 0, 00:04:43.316 "rw_mbytes_per_sec": 0, 00:04:43.316 "r_mbytes_per_sec": 0, 00:04:43.316 "w_mbytes_per_sec": 0 00:04:43.316 }, 00:04:43.316 "claimed": false, 00:04:43.316 "zoned": false, 00:04:43.316 
"supported_io_types": { 00:04:43.316 "read": true, 00:04:43.316 "write": true, 00:04:43.316 "unmap": true, 00:04:43.316 "flush": true, 00:04:43.316 "reset": true, 00:04:43.316 "nvme_admin": false, 00:04:43.316 "nvme_io": false, 00:04:43.316 "nvme_io_md": false, 00:04:43.316 "write_zeroes": true, 00:04:43.316 "zcopy": true, 00:04:43.316 "get_zone_info": false, 00:04:43.316 "zone_management": false, 00:04:43.316 "zone_append": false, 00:04:43.316 "compare": false, 00:04:43.316 "compare_and_write": false, 00:04:43.316 "abort": true, 00:04:43.316 "seek_hole": false, 00:04:43.316 "seek_data": false, 00:04:43.316 "copy": true, 00:04:43.316 "nvme_iov_md": false 00:04:43.316 }, 00:04:43.316 "memory_domains": [ 00:04:43.316 { 00:04:43.316 "dma_device_id": "system", 00:04:43.316 "dma_device_type": 1 00:04:43.316 }, 00:04:43.316 { 00:04:43.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.316 "dma_device_type": 2 00:04:43.316 } 00:04:43.316 ], 00:04:43.316 "driver_specific": {} 00:04:43.316 } 00:04:43.316 ]' 00:04:43.316 02:49:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:43.316 02:49:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:43.316 02:49:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:43.316 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.316 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.316 [2024-12-05 02:49:14.120521] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:43.316 [2024-12-05 02:49:14.120602] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.316 [2024-12-05 02:49:14.120638] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:04:43.316 [2024-12-05 02:49:14.120654] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:43.316 [2024-12-05 02:49:14.123507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.316 [2024-12-05 02:49:14.123566] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.316 Passthru0 00:04:43.316 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.316 02:49:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:43.316 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.316 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.316 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.316 02:49:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.316 { 00:04:43.316 "name": "Malloc0", 00:04:43.316 "aliases": [ 00:04:43.316 "5ee6554c-3b85-441c-bce5-ac2faf91b59a" 00:04:43.316 ], 00:04:43.316 "product_name": "Malloc disk", 00:04:43.316 "block_size": 512, 00:04:43.316 "num_blocks": 16384, 00:04:43.316 "uuid": "5ee6554c-3b85-441c-bce5-ac2faf91b59a", 00:04:43.316 "assigned_rate_limits": { 00:04:43.316 "rw_ios_per_sec": 0, 00:04:43.316 "rw_mbytes_per_sec": 0, 00:04:43.316 "r_mbytes_per_sec": 0, 00:04:43.316 "w_mbytes_per_sec": 0 00:04:43.316 }, 00:04:43.316 "claimed": true, 00:04:43.316 "claim_type": "exclusive_write", 00:04:43.316 "zoned": false, 00:04:43.316 "supported_io_types": { 00:04:43.316 "read": true, 00:04:43.316 "write": true, 00:04:43.316 "unmap": true, 00:04:43.316 "flush": true, 00:04:43.316 "reset": true, 00:04:43.316 "nvme_admin": false, 
00:04:43.316 "nvme_io": false, 00:04:43.316 "nvme_io_md": false, 00:04:43.316 "write_zeroes": true, 00:04:43.316 "zcopy": true, 00:04:43.316 "get_zone_info": false, 00:04:43.316 "zone_management": false, 00:04:43.316 "zone_append": false, 00:04:43.316 "compare": false, 00:04:43.316 "compare_and_write": false, 00:04:43.316 "abort": true, 00:04:43.316 "seek_hole": false, 00:04:43.316 "seek_data": false, 00:04:43.316 "copy": true, 00:04:43.316 "nvme_iov_md": false 00:04:43.316 }, 00:04:43.316 "memory_domains": [ 00:04:43.316 { 00:04:43.316 "dma_device_id": "system", 00:04:43.316 "dma_device_type": 1 00:04:43.316 }, 00:04:43.316 { 00:04:43.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.316 "dma_device_type": 2 00:04:43.316 } 00:04:43.316 ], 00:04:43.316 "driver_specific": {} 00:04:43.316 }, 00:04:43.316 { 00:04:43.316 "name": "Passthru0", 00:04:43.316 "aliases": [ 00:04:43.316 "8c3a53e2-237f-5e8e-a092-9aed46d5f95e" 00:04:43.316 ], 00:04:43.316 "product_name": "passthru", 00:04:43.316 "block_size": 512, 00:04:43.316 "num_blocks": 16384, 00:04:43.316 "uuid": "8c3a53e2-237f-5e8e-a092-9aed46d5f95e", 00:04:43.316 "assigned_rate_limits": { 00:04:43.316 "rw_ios_per_sec": 0, 00:04:43.316 "rw_mbytes_per_sec": 0, 00:04:43.316 "r_mbytes_per_sec": 0, 00:04:43.316 "w_mbytes_per_sec": 0 00:04:43.316 }, 00:04:43.316 "claimed": false, 00:04:43.316 "zoned": false, 00:04:43.316 "supported_io_types": { 00:04:43.316 "read": true, 00:04:43.316 "write": true, 00:04:43.316 "unmap": true, 00:04:43.316 "flush": true, 00:04:43.316 "reset": true, 00:04:43.316 "nvme_admin": false, 00:04:43.316 "nvme_io": false, 00:04:43.316 "nvme_io_md": false, 00:04:43.316 "write_zeroes": true, 00:04:43.316 "zcopy": true, 00:04:43.316 "get_zone_info": false, 00:04:43.316 "zone_management": false, 00:04:43.316 "zone_append": false, 00:04:43.316 "compare": false, 00:04:43.316 "compare_and_write": false, 00:04:43.316 "abort": true, 00:04:43.316 "seek_hole": false, 00:04:43.316 "seek_data": false, 00:04:43.316 "copy": true, 00:04:43.316 "nvme_iov_md": false 00:04:43.316 }, 00:04:43.316 "memory_domains": [ 00:04:43.316 { 00:04:43.316 "dma_device_id": "system", 00:04:43.316 "dma_device_type": 1 00:04:43.316 }, 00:04:43.316 { 00:04:43.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.316 "dma_device_type": 2 00:04:43.316 } 00:04:43.316 ], 00:04:43.316 "driver_specific": { 00:04:43.316 "passthru": { 00:04:43.316 "name": "Passthru0", 00:04:43.316 "base_bdev_name": "Malloc0" 00:04:43.316 } 00:04:43.316 } 00:04:43.316 } 00:04:43.316 ]' 00:04:43.576 02:49:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:43.576 02:49:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.576 02:49:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.576 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.576 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.576 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.576 02:49:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:43.576 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.576 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.576 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.576 02:49:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:43.576 02:49:14 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.576 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.576 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.576 02:49:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:43.576 02:49:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:43.576 02:49:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:43.576 00:04:43.576 real 0m0.340s 00:04:43.576 user 0m0.207s 00:04:43.576 sys 0m0.040s 00:04:43.576 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.576 02:49:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.576 ************************************ 00:04:43.576 END TEST rpc_integrity 00:04:43.576 ************************************ 00:04:43.576 02:49:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:43.576 02:49:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.576 02:49:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.576 02:49:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.576 ************************************ 00:04:43.576 START TEST rpc_plugins 00:04:43.576 ************************************ 00:04:43.576 02:49:14 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:43.576 02:49:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:43.576 02:49:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.576 02:49:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.576 02:49:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.576 02:49:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:43.576 02:49:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:43.577 02:49:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.577 02:49:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.577 02:49:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.577 02:49:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:43.577 { 00:04:43.577 "name": "Malloc1", 00:04:43.577 "aliases": [ 00:04:43.577 "47833f27-4b91-4e00-b48d-4d2cc9d0fcff" 00:04:43.577 ], 00:04:43.577 "product_name": "Malloc disk", 00:04:43.577 "block_size": 4096, 00:04:43.577 "num_blocks": 256, 00:04:43.577 "uuid": "47833f27-4b91-4e00-b48d-4d2cc9d0fcff", 00:04:43.577 "assigned_rate_limits": { 00:04:43.577 "rw_ios_per_sec": 0, 00:04:43.577 "rw_mbytes_per_sec": 0, 00:04:43.577 "r_mbytes_per_sec": 0, 00:04:43.577 "w_mbytes_per_sec": 0 00:04:43.577 }, 00:04:43.577 "claimed": false, 00:04:43.577 "zoned": false, 00:04:43.577 "supported_io_types": { 00:04:43.577 "read": true, 00:04:43.577 "write": true, 00:04:43.577 "unmap": true, 00:04:43.577 "flush": true, 00:04:43.577 "reset": true, 00:04:43.577 "nvme_admin": false, 00:04:43.577 "nvme_io": false, 00:04:43.577 "nvme_io_md": false, 00:04:43.577 "write_zeroes": true, 00:04:43.577 "zcopy": true, 00:04:43.577 "get_zone_info": false, 00:04:43.577 "zone_management": false, 00:04:43.577 "zone_append": false, 00:04:43.577 "compare": false, 00:04:43.577 "compare_and_write": false, 00:04:43.577 "abort": true, 00:04:43.577 "seek_hole": false, 00:04:43.577 "seek_data": false, 00:04:43.577 "copy": true, 00:04:43.577 "nvme_iov_md": false 00:04:43.577 }, 00:04:43.577 "memory_domains": [ 00:04:43.577 { 
00:04:43.577 "dma_device_id": "system", 00:04:43.577 "dma_device_type": 1 00:04:43.577 }, 00:04:43.577 { 00:04:43.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.577 "dma_device_type": 2 00:04:43.577 } 00:04:43.577 ], 00:04:43.577 "driver_specific": {} 00:04:43.577 } 00:04:43.577 ]' 00:04:43.577 02:49:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:43.838 02:49:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:43.838 02:49:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:43.838 02:49:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.838 02:49:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.838 02:49:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.838 02:49:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:43.838 02:49:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.838 02:49:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.838 02:49:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.838 02:49:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:43.838 02:49:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:43.838 02:49:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:43.838 00:04:43.838 real 0m0.173s 00:04:43.838 user 0m0.108s 00:04:43.838 sys 0m0.025s 00:04:43.838 02:49:14 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.838 ************************************ 00:04:43.838 END TEST rpc_plugins 00:04:43.838 ************************************ 00:04:43.838 02:49:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.838 02:49:14 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:43.838 02:49:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.838 02:49:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.838 02:49:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.838 ************************************ 00:04:43.838 START TEST rpc_trace_cmd_test 00:04:43.838 ************************************ 00:04:43.838 02:49:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:43.838 02:49:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:43.838 02:49:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:43.838 02:49:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.838 02:49:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:43.838 02:49:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.838 02:49:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:43.838 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57438", 00:04:43.838 "tpoint_group_mask": "0x8", 00:04:43.838 "iscsi_conn": { 00:04:43.838 "mask": "0x2", 00:04:43.838 "tpoint_mask": "0x0" 00:04:43.838 }, 00:04:43.838 "scsi": { 00:04:43.838 "mask": "0x4", 00:04:43.838 "tpoint_mask": "0x0" 00:04:43.838 }, 00:04:43.838 "bdev": { 00:04:43.838 "mask": "0x8", 00:04:43.838 "tpoint_mask": "0xffffffffffffffff" 00:04:43.838 }, 00:04:43.838 "nvmf_rdma": { 00:04:43.838 "mask": "0x10", 00:04:43.838 "tpoint_mask": "0x0" 00:04:43.838 }, 00:04:43.838 "nvmf_tcp": { 00:04:43.838 "mask": "0x20", 00:04:43.838 "tpoint_mask": "0x0" 00:04:43.839 }, 00:04:43.839 "ftl": { 00:04:43.839 
"mask": "0x40", 00:04:43.839 "tpoint_mask": "0x0" 00:04:43.839 }, 00:04:43.839 "blobfs": { 00:04:43.839 "mask": "0x80", 00:04:43.839 "tpoint_mask": "0x0" 00:04:43.839 }, 00:04:43.839 "dsa": { 00:04:43.839 "mask": "0x200", 00:04:43.839 "tpoint_mask": "0x0" 00:04:43.839 }, 00:04:43.839 "thread": { 00:04:43.839 "mask": "0x400", 00:04:43.839 "tpoint_mask": "0x0" 00:04:43.839 }, 00:04:43.839 "nvme_pcie": { 00:04:43.839 "mask": "0x800", 00:04:43.839 "tpoint_mask": "0x0" 00:04:43.839 }, 00:04:43.839 "iaa": { 00:04:43.839 "mask": "0x1000", 00:04:43.839 "tpoint_mask": "0x0" 00:04:43.839 }, 00:04:43.839 "nvme_tcp": { 00:04:43.839 "mask": "0x2000", 00:04:43.839 "tpoint_mask": "0x0" 00:04:43.839 }, 00:04:43.839 "bdev_nvme": { 00:04:43.839 "mask": "0x4000", 00:04:43.839 "tpoint_mask": "0x0" 00:04:43.839 }, 00:04:43.839 "sock": { 00:04:43.839 "mask": "0x8000", 00:04:43.839 "tpoint_mask": "0x0" 00:04:43.839 }, 00:04:43.839 "blob": { 00:04:43.839 "mask": "0x10000", 00:04:43.839 "tpoint_mask": "0x0" 00:04:43.839 }, 00:04:43.839 "bdev_raid": { 00:04:43.839 "mask": "0x20000", 00:04:43.839 "tpoint_mask": "0x0" 00:04:43.839 }, 00:04:43.839 "scheduler": { 00:04:43.839 "mask": "0x40000", 00:04:43.839 "tpoint_mask": "0x0" 00:04:43.839 } 00:04:43.839 }' 00:04:43.839 02:49:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:43.839 02:49:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:43.839 02:49:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:44.098 02:49:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:44.098 02:49:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:44.098 02:49:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:44.098 02:49:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:44.098 02:49:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:44.098 02:49:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:44.098 02:49:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:44.098 00:04:44.098 real 0m0.280s 00:04:44.098 user 0m0.240s 00:04:44.098 sys 0m0.029s 00:04:44.098 02:49:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.098 ************************************ 00:04:44.098 END TEST rpc_trace_cmd_test 00:04:44.098 02:49:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:44.098 ************************************ 00:04:44.098 02:49:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:44.098 02:49:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:44.098 02:49:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:44.098 02:49:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.098 02:49:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.098 02:49:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.098 ************************************ 00:04:44.098 START TEST rpc_daemon_integrity 00:04:44.098 ************************************ 00:04:44.098 02:49:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:44.098 02:49:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.098 02:49:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.098 02:49:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.098 
02:49:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.098 02:49:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:44.098 02:49:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:44.358 02:49:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:44.358 02:49:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:44.358 02:49:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.358 02:49:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.358 02:49:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.358 02:49:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:44.358 02:49:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:44.358 02:49:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.358 02:49:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.358 02:49:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.358 02:49:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:44.358 { 00:04:44.358 "name": "Malloc2", 00:04:44.358 "aliases": [ 00:04:44.358 "1623c1cd-e876-482c-befa-7ba14d8eb3ef" 00:04:44.358 ], 00:04:44.358 "product_name": "Malloc disk", 00:04:44.358 "block_size": 512, 00:04:44.358 "num_blocks": 16384, 00:04:44.358 "uuid": "1623c1cd-e876-482c-befa-7ba14d8eb3ef", 00:04:44.358 "assigned_rate_limits": { 00:04:44.358 "rw_ios_per_sec": 0, 00:04:44.358 "rw_mbytes_per_sec": 0, 00:04:44.358 "r_mbytes_per_sec": 0, 00:04:44.358 "w_mbytes_per_sec": 0 00:04:44.358 }, 00:04:44.358 "claimed": false, 00:04:44.358 "zoned": false, 00:04:44.358 "supported_io_types": { 00:04:44.358 "read": true, 00:04:44.358 "write": true, 00:04:44.358 "unmap": true, 00:04:44.358 "flush": true, 00:04:44.358 "reset": true, 00:04:44.358 "nvme_admin": false, 00:04:44.358 "nvme_io": false, 00:04:44.358 "nvme_io_md": false, 00:04:44.358 "write_zeroes": true, 00:04:44.358 "zcopy": true, 00:04:44.358 "get_zone_info": false, 00:04:44.358 "zone_management": false, 00:04:44.358 "zone_append": false, 00:04:44.358 "compare": false, 00:04:44.358 "compare_and_write": false, 00:04:44.358 "abort": true, 00:04:44.358 "seek_hole": false, 00:04:44.358 "seek_data": false, 00:04:44.358 "copy": true, 00:04:44.358 "nvme_iov_md": false 00:04:44.358 }, 00:04:44.358 "memory_domains": [ 00:04:44.358 { 00:04:44.358 "dma_device_id": "system", 00:04:44.358 "dma_device_type": 1 00:04:44.358 }, 00:04:44.358 { 00:04:44.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.358 "dma_device_type": 2 00:04:44.358 } 00:04:44.358 ], 00:04:44.358 "driver_specific": {} 00:04:44.358 } 00:04:44.358 ]' 00:04:44.358 02:49:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:44.358 02:49:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:44.358 02:49:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:44.358 02:49:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.358 02:49:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.358 [2024-12-05 02:49:15.073062] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:44.358 [2024-12-05 02:49:15.073172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:44.358 [2024-12-05 02:49:15.073201] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:04:44.358 [2024-12-05 02:49:15.073214] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:44.358 [2024-12-05 02:49:15.075702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:44.358 [2024-12-05 02:49:15.075741] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:44.358 Passthru0 00:04:44.358 02:49:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.358 02:49:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:44.358 02:49:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.358 02:49:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.358 02:49:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.358 02:49:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:44.358 { 00:04:44.358 "name": "Malloc2", 00:04:44.358 "aliases": [ 00:04:44.358 "1623c1cd-e876-482c-befa-7ba14d8eb3ef" 00:04:44.358 ], 00:04:44.358 "product_name": "Malloc disk", 00:04:44.358 "block_size": 512, 00:04:44.358 "num_blocks": 16384, 00:04:44.358 "uuid": "1623c1cd-e876-482c-befa-7ba14d8eb3ef", 00:04:44.358 "assigned_rate_limits": { 00:04:44.358 "rw_ios_per_sec": 0, 00:04:44.358 "rw_mbytes_per_sec": 0, 00:04:44.358 "r_mbytes_per_sec": 0, 00:04:44.358 "w_mbytes_per_sec": 0 00:04:44.358 }, 00:04:44.358 "claimed": true, 00:04:44.358 "claim_type": "exclusive_write", 00:04:44.358 "zoned": false, 00:04:44.358 "supported_io_types": { 00:04:44.358 "read": true, 00:04:44.358 "write": true, 00:04:44.358 "unmap": true, 00:04:44.358 "flush": true, 00:04:44.358 "reset": true, 00:04:44.358 "nvme_admin": false, 00:04:44.358 "nvme_io": false, 00:04:44.358 "nvme_io_md": false, 00:04:44.358 "write_zeroes": true, 00:04:44.358 "zcopy": true, 00:04:44.358 "get_zone_info": false, 00:04:44.358 "zone_management": false, 00:04:44.358 "zone_append": false, 00:04:44.358 "compare": false, 00:04:44.358 "compare_and_write": false, 00:04:44.358 "abort": true, 00:04:44.358 "seek_hole": false, 00:04:44.358 "seek_data": false, 00:04:44.358 "copy": true, 00:04:44.358 "nvme_iov_md": false 00:04:44.358 }, 00:04:44.358 "memory_domains": [ 00:04:44.358 { 00:04:44.358 "dma_device_id": "system", 00:04:44.358 "dma_device_type": 1 00:04:44.358 }, 00:04:44.358 { 00:04:44.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.358 "dma_device_type": 2 00:04:44.358 } 00:04:44.358 ], 00:04:44.358 "driver_specific": {} 00:04:44.358 }, 00:04:44.358 { 00:04:44.358 "name": "Passthru0", 00:04:44.358 "aliases": [ 00:04:44.358 "42ec708c-05a7-5c27-9590-5941c7e5b736" 00:04:44.358 ], 00:04:44.358 "product_name": "passthru", 00:04:44.358 "block_size": 512, 00:04:44.358 "num_blocks": 16384, 00:04:44.358 "uuid": "42ec708c-05a7-5c27-9590-5941c7e5b736", 00:04:44.358 "assigned_rate_limits": { 00:04:44.358 "rw_ios_per_sec": 0, 00:04:44.358 "rw_mbytes_per_sec": 0, 00:04:44.358 "r_mbytes_per_sec": 0, 00:04:44.358 "w_mbytes_per_sec": 0 00:04:44.358 }, 00:04:44.358 "claimed": false, 00:04:44.358 "zoned": false, 00:04:44.358 "supported_io_types": { 00:04:44.358 "read": true, 00:04:44.358 "write": true, 00:04:44.358 "unmap": true, 00:04:44.358 "flush": true, 00:04:44.358 "reset": true, 00:04:44.358 "nvme_admin": false, 00:04:44.358 "nvme_io": false, 00:04:44.358 
"nvme_io_md": false, 00:04:44.358 "write_zeroes": true, 00:04:44.358 "zcopy": true, 00:04:44.358 "get_zone_info": false, 00:04:44.358 "zone_management": false, 00:04:44.358 "zone_append": false, 00:04:44.358 "compare": false, 00:04:44.358 "compare_and_write": false, 00:04:44.358 "abort": true, 00:04:44.358 "seek_hole": false, 00:04:44.358 "seek_data": false, 00:04:44.358 "copy": true, 00:04:44.358 "nvme_iov_md": false 00:04:44.358 }, 00:04:44.358 "memory_domains": [ 00:04:44.358 { 00:04:44.358 "dma_device_id": "system", 00:04:44.358 "dma_device_type": 1 00:04:44.358 }, 00:04:44.358 { 00:04:44.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.358 "dma_device_type": 2 00:04:44.358 } 00:04:44.358 ], 00:04:44.358 "driver_specific": { 00:04:44.358 "passthru": { 00:04:44.358 "name": "Passthru0", 00:04:44.358 "base_bdev_name": "Malloc2" 00:04:44.358 } 00:04:44.358 } 00:04:44.358 } 00:04:44.358 ]' 00:04:44.358 02:49:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:44.358 02:49:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:44.358 02:49:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:44.358 02:49:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.358 02:49:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.358 02:49:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.359 02:49:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:44.359 02:49:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.359 02:49:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.359 02:49:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.359 02:49:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.359 02:49:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.359 02:49:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.616 02:49:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.616 02:49:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.616 02:49:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:44.616 02:49:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:44.616 00:04:44.616 real 0m0.338s 00:04:44.616 user 0m0.216s 00:04:44.616 sys 0m0.038s 00:04:44.616 02:49:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.616 ************************************ 00:04:44.616 END TEST rpc_daemon_integrity 00:04:44.616 02:49:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.616 ************************************ 00:04:44.616 02:49:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:44.616 02:49:15 rpc -- rpc/rpc.sh@84 -- # killprocess 57438 00:04:44.616 02:49:15 rpc -- common/autotest_common.sh@954 -- # '[' -z 57438 ']' 00:04:44.616 02:49:15 rpc -- common/autotest_common.sh@958 -- # kill -0 57438 00:04:44.616 02:49:15 rpc -- common/autotest_common.sh@959 -- # uname 00:04:44.616 02:49:15 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.616 02:49:15 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57438 00:04:44.616 02:49:15 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:04:44.616 02:49:15 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.616 killing process with pid 57438 00:04:44.616 02:49:15 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57438' 00:04:44.616 02:49:15 rpc -- common/autotest_common.sh@973 -- # kill 57438 00:04:44.616 02:49:15 rpc -- common/autotest_common.sh@978 -- # wait 57438 00:04:46.518 00:04:46.518 real 0m4.724s 00:04:46.518 user 0m5.510s 00:04:46.518 sys 0m0.839s 00:04:46.518 02:49:17 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.518 02:49:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.518 ************************************ 00:04:46.518 END TEST rpc 00:04:46.518 ************************************ 00:04:46.518 02:49:17 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:46.518 02:49:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.518 02:49:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.518 02:49:17 -- common/autotest_common.sh@10 -- # set +x 00:04:46.518 ************************************ 00:04:46.518 START TEST skip_rpc 00:04:46.518 ************************************ 00:04:46.518 02:49:17 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:46.776 * Looking for test storage... 00:04:46.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:46.776 02:49:17 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:46.776 02:49:17 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:46.776 02:49:17 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:46.776 02:49:17 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.776 02:49:17 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:46.777 02:49:17 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.777 02:49:17 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.777 02:49:17 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.777 02:49:17 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:46.777 02:49:17 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.777 02:49:17 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:46.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.777 --rc genhtml_branch_coverage=1 00:04:46.777 --rc genhtml_function_coverage=1 00:04:46.777 --rc genhtml_legend=1 00:04:46.777 --rc geninfo_all_blocks=1 00:04:46.777 --rc geninfo_unexecuted_blocks=1 00:04:46.777 00:04:46.777 ' 00:04:46.777 02:49:17 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:46.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.777 --rc genhtml_branch_coverage=1 00:04:46.777 --rc genhtml_function_coverage=1 00:04:46.777 --rc genhtml_legend=1 00:04:46.777 --rc geninfo_all_blocks=1 00:04:46.777 --rc geninfo_unexecuted_blocks=1 00:04:46.777 00:04:46.777 ' 00:04:46.777 02:49:17 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:46.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.777 --rc genhtml_branch_coverage=1 00:04:46.777 --rc genhtml_function_coverage=1 00:04:46.777 --rc genhtml_legend=1 00:04:46.777 --rc geninfo_all_blocks=1 00:04:46.777 --rc geninfo_unexecuted_blocks=1 00:04:46.777 00:04:46.777 ' 00:04:46.777 02:49:17 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:46.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.777 --rc genhtml_branch_coverage=1 00:04:46.777 --rc genhtml_function_coverage=1 00:04:46.777 --rc genhtml_legend=1 00:04:46.777 --rc geninfo_all_blocks=1 00:04:46.777 --rc geninfo_unexecuted_blocks=1 00:04:46.777 00:04:46.777 ' 00:04:46.777 02:49:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:46.777 02:49:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:46.777 02:49:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:46.777 02:49:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.777 02:49:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.777 02:49:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.777 ************************************ 00:04:46.777 START TEST skip_rpc 00:04:46.777 ************************************ 00:04:46.777 02:49:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:46.777 02:49:17 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57661 00:04:46.777 02:49:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.777 02:49:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:46.777 02:49:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:46.777 [2024-12-05 02:49:17.612183] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:04:46.777 [2024-12-05 02:49:17.612421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57661 ] 00:04:47.035 [2024-12-05 02:49:17.787632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.035 [2024-12-05 02:49:17.874721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.294 [2024-12-05 02:49:18.072736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57661 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57661 ']' 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57661 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57661 00:04:52.596 killing process with pid 57661 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 57661' 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57661 00:04:52.596 02:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57661 00:04:53.534 00:04:53.534 real 0m6.790s 00:04:53.534 user 0m6.359s 00:04:53.534 sys 0m0.324s 00:04:53.534 02:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.534 ************************************ 00:04:53.534 END TEST skip_rpc 00:04:53.534 ************************************ 00:04:53.534 02:49:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.534 02:49:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:53.534 02:49:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.534 02:49:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.534 02:49:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.534 ************************************ 00:04:53.534 START TEST skip_rpc_with_json 00:04:53.534 ************************************ 00:04:53.534 02:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:53.534 02:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:53.534 02:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57760 00:04:53.534 02:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.534 02:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57760 00:04:53.534 02:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.534 02:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57760 ']' 00:04:53.534 02:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.534 02:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.534 02:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.534 02:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.534 02:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.794 [2024-12-05 02:49:24.448266] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
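The skip_rpc pass that just completed (real 0m6.790s) reduces to: start the target with its RPC server disabled, prove that an RPC call then fails, and tear the target down. A condensed sketch of that flow using only the flags visible in the trace, with scripts/rpc.py standing in for the rpc_cmd wrapper (an assumption about the helper, not part of the log):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5                                  # the test sleeps rather than polling the RPC socket
  if scripts/rpc.py spdk_get_version; then
    echo "spdk_get_version unexpectedly succeeded" >&2
    exit 1                                 # with --no-rpc-server this call must fail
  fi
  kill "$spdk_pid"
  wait "$spdk_pid" || true                 # reap the target; a signal exit status is expected here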
00:04:53.794 [2024-12-05 02:49:24.448429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57760 ] 00:04:53.794 [2024-12-05 02:49:24.619755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.053 [2024-12-05 02:49:24.706983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.053 [2024-12-05 02:49:24.892793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:54.622 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.622 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:54.622 02:49:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:54.622 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.622 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.622 [2024-12-05 02:49:25.372819] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:54.622 request: 00:04:54.622 { 00:04:54.622 "trtype": "tcp", 00:04:54.622 "method": "nvmf_get_transports", 00:04:54.622 "req_id": 1 00:04:54.622 } 00:04:54.622 Got JSON-RPC error response 00:04:54.622 response: 00:04:54.622 { 00:04:54.622 "code": -19, 00:04:54.622 "message": "No such device" 00:04:54.622 } 00:04:54.622 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:54.622 02:49:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:54.622 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.622 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.622 [2024-12-05 02:49:25.384928] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:54.622 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.622 02:49:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:54.622 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.622 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.881 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.881 02:49:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:54.881 { 00:04:54.881 "subsystems": [ 00:04:54.881 { 00:04:54.881 "subsystem": "fsdev", 00:04:54.881 "config": [ 00:04:54.881 { 00:04:54.881 "method": "fsdev_set_opts", 00:04:54.881 "params": { 00:04:54.881 "fsdev_io_pool_size": 65535, 00:04:54.881 "fsdev_io_cache_size": 256 00:04:54.881 } 00:04:54.881 } 00:04:54.881 ] 00:04:54.881 }, 00:04:54.881 { 00:04:54.881 "subsystem": "vfio_user_target", 00:04:54.881 "config": null 00:04:54.881 }, 00:04:54.881 { 00:04:54.881 "subsystem": "keyring", 00:04:54.881 "config": [] 00:04:54.881 }, 00:04:54.881 { 00:04:54.881 "subsystem": "iobuf", 00:04:54.881 "config": [ 00:04:54.881 { 00:04:54.881 "method": "iobuf_set_options", 00:04:54.881 "params": { 00:04:54.881 "small_pool_count": 8192, 00:04:54.881 "large_pool_count": 1024, 00:04:54.881 
"small_bufsize": 8192, 00:04:54.881 "large_bufsize": 135168, 00:04:54.881 "enable_numa": false 00:04:54.881 } 00:04:54.881 } 00:04:54.881 ] 00:04:54.881 }, 00:04:54.881 { 00:04:54.881 "subsystem": "sock", 00:04:54.881 "config": [ 00:04:54.881 { 00:04:54.881 "method": "sock_set_default_impl", 00:04:54.881 "params": { 00:04:54.881 "impl_name": "uring" 00:04:54.881 } 00:04:54.881 }, 00:04:54.881 { 00:04:54.881 "method": "sock_impl_set_options", 00:04:54.881 "params": { 00:04:54.881 "impl_name": "ssl", 00:04:54.881 "recv_buf_size": 4096, 00:04:54.881 "send_buf_size": 4096, 00:04:54.881 "enable_recv_pipe": true, 00:04:54.881 "enable_quickack": false, 00:04:54.881 "enable_placement_id": 0, 00:04:54.881 "enable_zerocopy_send_server": true, 00:04:54.881 "enable_zerocopy_send_client": false, 00:04:54.881 "zerocopy_threshold": 0, 00:04:54.881 "tls_version": 0, 00:04:54.881 "enable_ktls": false 00:04:54.881 } 00:04:54.881 }, 00:04:54.881 { 00:04:54.881 "method": "sock_impl_set_options", 00:04:54.881 "params": { 00:04:54.881 "impl_name": "posix", 00:04:54.881 "recv_buf_size": 2097152, 00:04:54.881 "send_buf_size": 2097152, 00:04:54.881 "enable_recv_pipe": true, 00:04:54.881 "enable_quickack": false, 00:04:54.881 "enable_placement_id": 0, 00:04:54.881 "enable_zerocopy_send_server": true, 00:04:54.881 "enable_zerocopy_send_client": false, 00:04:54.881 "zerocopy_threshold": 0, 00:04:54.881 "tls_version": 0, 00:04:54.881 "enable_ktls": false 00:04:54.881 } 00:04:54.881 }, 00:04:54.881 { 00:04:54.881 "method": "sock_impl_set_options", 00:04:54.881 "params": { 00:04:54.881 "impl_name": "uring", 00:04:54.881 "recv_buf_size": 2097152, 00:04:54.881 "send_buf_size": 2097152, 00:04:54.881 "enable_recv_pipe": true, 00:04:54.881 "enable_quickack": false, 00:04:54.881 "enable_placement_id": 0, 00:04:54.881 "enable_zerocopy_send_server": false, 00:04:54.881 "enable_zerocopy_send_client": false, 00:04:54.881 "zerocopy_threshold": 0, 00:04:54.881 "tls_version": 0, 00:04:54.881 "enable_ktls": false 00:04:54.881 } 00:04:54.881 } 00:04:54.881 ] 00:04:54.881 }, 00:04:54.881 { 00:04:54.881 "subsystem": "vmd", 00:04:54.881 "config": [] 00:04:54.881 }, 00:04:54.881 { 00:04:54.881 "subsystem": "accel", 00:04:54.881 "config": [ 00:04:54.881 { 00:04:54.881 "method": "accel_set_options", 00:04:54.881 "params": { 00:04:54.881 "small_cache_size": 128, 00:04:54.881 "large_cache_size": 16, 00:04:54.881 "task_count": 2048, 00:04:54.881 "sequence_count": 2048, 00:04:54.881 "buf_count": 2048 00:04:54.881 } 00:04:54.881 } 00:04:54.881 ] 00:04:54.881 }, 00:04:54.881 { 00:04:54.881 "subsystem": "bdev", 00:04:54.881 "config": [ 00:04:54.881 { 00:04:54.881 "method": "bdev_set_options", 00:04:54.881 "params": { 00:04:54.881 "bdev_io_pool_size": 65535, 00:04:54.881 "bdev_io_cache_size": 256, 00:04:54.881 "bdev_auto_examine": true, 00:04:54.881 "iobuf_small_cache_size": 128, 00:04:54.881 "iobuf_large_cache_size": 16 00:04:54.881 } 00:04:54.881 }, 00:04:54.881 { 00:04:54.881 "method": "bdev_raid_set_options", 00:04:54.881 "params": { 00:04:54.881 "process_window_size_kb": 1024, 00:04:54.881 "process_max_bandwidth_mb_sec": 0 00:04:54.881 } 00:04:54.881 }, 00:04:54.881 { 00:04:54.881 "method": "bdev_iscsi_set_options", 00:04:54.881 "params": { 00:04:54.881 "timeout_sec": 30 00:04:54.881 } 00:04:54.881 }, 00:04:54.881 { 00:04:54.881 "method": "bdev_nvme_set_options", 00:04:54.881 "params": { 00:04:54.881 "action_on_timeout": "none", 00:04:54.881 "timeout_us": 0, 00:04:54.881 "timeout_admin_us": 0, 00:04:54.881 "keep_alive_timeout_ms": 10000, 
00:04:54.881 "arbitration_burst": 0, 00:04:54.881 "low_priority_weight": 0, 00:04:54.881 "medium_priority_weight": 0, 00:04:54.881 "high_priority_weight": 0, 00:04:54.881 "nvme_adminq_poll_period_us": 10000, 00:04:54.881 "nvme_ioq_poll_period_us": 0, 00:04:54.881 "io_queue_requests": 0, 00:04:54.881 "delay_cmd_submit": true, 00:04:54.881 "transport_retry_count": 4, 00:04:54.881 "bdev_retry_count": 3, 00:04:54.881 "transport_ack_timeout": 0, 00:04:54.881 "ctrlr_loss_timeout_sec": 0, 00:04:54.881 "reconnect_delay_sec": 0, 00:04:54.881 "fast_io_fail_timeout_sec": 0, 00:04:54.881 "disable_auto_failback": false, 00:04:54.881 "generate_uuids": false, 00:04:54.881 "transport_tos": 0, 00:04:54.881 "nvme_error_stat": false, 00:04:54.881 "rdma_srq_size": 0, 00:04:54.881 "io_path_stat": false, 00:04:54.881 "allow_accel_sequence": false, 00:04:54.881 "rdma_max_cq_size": 0, 00:04:54.881 "rdma_cm_event_timeout_ms": 0, 00:04:54.881 "dhchap_digests": [ 00:04:54.881 "sha256", 00:04:54.881 "sha384", 00:04:54.881 "sha512" 00:04:54.881 ], 00:04:54.881 "dhchap_dhgroups": [ 00:04:54.882 "null", 00:04:54.882 "ffdhe2048", 00:04:54.882 "ffdhe3072", 00:04:54.882 "ffdhe4096", 00:04:54.882 "ffdhe6144", 00:04:54.882 "ffdhe8192" 00:04:54.882 ] 00:04:54.882 } 00:04:54.882 }, 00:04:54.882 { 00:04:54.882 "method": "bdev_nvme_set_hotplug", 00:04:54.882 "params": { 00:04:54.882 "period_us": 100000, 00:04:54.882 "enable": false 00:04:54.882 } 00:04:54.882 }, 00:04:54.882 { 00:04:54.882 "method": "bdev_wait_for_examine" 00:04:54.882 } 00:04:54.882 ] 00:04:54.882 }, 00:04:54.882 { 00:04:54.882 "subsystem": "scsi", 00:04:54.882 "config": null 00:04:54.882 }, 00:04:54.882 { 00:04:54.882 "subsystem": "scheduler", 00:04:54.882 "config": [ 00:04:54.882 { 00:04:54.882 "method": "framework_set_scheduler", 00:04:54.882 "params": { 00:04:54.882 "name": "static" 00:04:54.882 } 00:04:54.882 } 00:04:54.882 ] 00:04:54.882 }, 00:04:54.882 { 00:04:54.882 "subsystem": "vhost_scsi", 00:04:54.882 "config": [] 00:04:54.882 }, 00:04:54.882 { 00:04:54.882 "subsystem": "vhost_blk", 00:04:54.882 "config": [] 00:04:54.882 }, 00:04:54.882 { 00:04:54.882 "subsystem": "ublk", 00:04:54.882 "config": [] 00:04:54.882 }, 00:04:54.882 { 00:04:54.882 "subsystem": "nbd", 00:04:54.882 "config": [] 00:04:54.882 }, 00:04:54.882 { 00:04:54.882 "subsystem": "nvmf", 00:04:54.882 "config": [ 00:04:54.882 { 00:04:54.882 "method": "nvmf_set_config", 00:04:54.882 "params": { 00:04:54.882 "discovery_filter": "match_any", 00:04:54.882 "admin_cmd_passthru": { 00:04:54.882 "identify_ctrlr": false 00:04:54.882 }, 00:04:54.882 "dhchap_digests": [ 00:04:54.882 "sha256", 00:04:54.882 "sha384", 00:04:54.882 "sha512" 00:04:54.882 ], 00:04:54.882 "dhchap_dhgroups": [ 00:04:54.882 "null", 00:04:54.882 "ffdhe2048", 00:04:54.882 "ffdhe3072", 00:04:54.882 "ffdhe4096", 00:04:54.882 "ffdhe6144", 00:04:54.882 "ffdhe8192" 00:04:54.882 ] 00:04:54.882 } 00:04:54.882 }, 00:04:54.882 { 00:04:54.882 "method": "nvmf_set_max_subsystems", 00:04:54.882 "params": { 00:04:54.882 "max_subsystems": 1024 00:04:54.882 } 00:04:54.882 }, 00:04:54.882 { 00:04:54.882 "method": "nvmf_set_crdt", 00:04:54.882 "params": { 00:04:54.882 "crdt1": 0, 00:04:54.882 "crdt2": 0, 00:04:54.882 "crdt3": 0 00:04:54.882 } 00:04:54.882 }, 00:04:54.882 { 00:04:54.882 "method": "nvmf_create_transport", 00:04:54.882 "params": { 00:04:54.882 "trtype": "TCP", 00:04:54.882 "max_queue_depth": 128, 00:04:54.882 "max_io_qpairs_per_ctrlr": 127, 00:04:54.882 "in_capsule_data_size": 4096, 00:04:54.882 "max_io_size": 131072, 00:04:54.882 
"io_unit_size": 131072, 00:04:54.882 "max_aq_depth": 128, 00:04:54.882 "num_shared_buffers": 511, 00:04:54.882 "buf_cache_size": 4294967295, 00:04:54.882 "dif_insert_or_strip": false, 00:04:54.882 "zcopy": false, 00:04:54.882 "c2h_success": true, 00:04:54.882 "sock_priority": 0, 00:04:54.882 "abort_timeout_sec": 1, 00:04:54.882 "ack_timeout": 0, 00:04:54.882 "data_wr_pool_size": 0 00:04:54.882 } 00:04:54.882 } 00:04:54.882 ] 00:04:54.882 }, 00:04:54.882 { 00:04:54.882 "subsystem": "iscsi", 00:04:54.882 "config": [ 00:04:54.882 { 00:04:54.882 "method": "iscsi_set_options", 00:04:54.882 "params": { 00:04:54.882 "node_base": "iqn.2016-06.io.spdk", 00:04:54.882 "max_sessions": 128, 00:04:54.882 "max_connections_per_session": 2, 00:04:54.882 "max_queue_depth": 64, 00:04:54.882 "default_time2wait": 2, 00:04:54.882 "default_time2retain": 20, 00:04:54.882 "first_burst_length": 8192, 00:04:54.882 "immediate_data": true, 00:04:54.882 "allow_duplicated_isid": false, 00:04:54.882 "error_recovery_level": 0, 00:04:54.882 "nop_timeout": 60, 00:04:54.882 "nop_in_interval": 30, 00:04:54.882 "disable_chap": false, 00:04:54.882 "require_chap": false, 00:04:54.882 "mutual_chap": false, 00:04:54.882 "chap_group": 0, 00:04:54.882 "max_large_datain_per_connection": 64, 00:04:54.882 "max_r2t_per_connection": 4, 00:04:54.882 "pdu_pool_size": 36864, 00:04:54.882 "immediate_data_pool_size": 16384, 00:04:54.882 "data_out_pool_size": 2048 00:04:54.882 } 00:04:54.882 } 00:04:54.882 ] 00:04:54.882 } 00:04:54.882 ] 00:04:54.882 } 00:04:54.882 02:49:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:54.882 02:49:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57760 00:04:54.882 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57760 ']' 00:04:54.882 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57760 00:04:54.882 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:54.882 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.882 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57760 00:04:54.882 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.882 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.882 killing process with pid 57760 00:04:54.882 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57760' 00:04:54.882 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57760 00:04:54.882 02:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57760 00:04:56.786 02:49:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57805 00:04:56.786 02:49:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:56.786 02:49:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:02.056 02:49:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57805 00:05:02.056 02:49:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57805 ']' 00:05:02.056 02:49:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57805 00:05:02.056 
02:49:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:02.056 02:49:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.056 02:49:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57805 00:05:02.056 02:49:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.056 02:49:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.056 killing process with pid 57805 00:05:02.056 02:49:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57805' 00:05:02.056 02:49:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57805 00:05:02.056 02:49:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57805 00:05:03.436 02:49:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:03.436 02:49:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:03.436 00:05:03.436 real 0m9.882s 00:05:03.436 user 0m9.476s 00:05:03.436 sys 0m0.773s 00:05:03.436 02:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.436 02:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.436 ************************************ 00:05:03.436 END TEST skip_rpc_with_json 00:05:03.436 ************************************ 00:05:03.436 02:49:34 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:03.436 02:49:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.436 02:49:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.436 02:49:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.436 ************************************ 00:05:03.436 START TEST skip_rpc_with_delay 00:05:03.436 ************************************ 00:05:03.436 02:49:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:03.436 02:49:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:03.436 02:49:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:03.436 02:49:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:03.436 02:49:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.436 02:49:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.436 02:49:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.436 02:49:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.436 02:49:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.436 02:49:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.436 02:49:34 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.436 02:49:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:03.436 02:49:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:03.695 [2024-12-05 02:49:34.389600] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:03.695 02:49:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:03.695 02:49:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:03.695 02:49:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:03.695 02:49:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:03.695 00:05:03.695 real 0m0.198s 00:05:03.695 user 0m0.111s 00:05:03.696 sys 0m0.085s 00:05:03.696 02:49:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.696 02:49:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:03.696 ************************************ 00:05:03.696 END TEST skip_rpc_with_delay 00:05:03.696 ************************************ 00:05:03.696 02:49:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:03.696 02:49:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:03.696 02:49:34 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:03.696 02:49:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.696 02:49:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.696 02:49:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.696 ************************************ 00:05:03.696 START TEST exit_on_failed_rpc_init 00:05:03.696 ************************************ 00:05:03.696 02:49:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:03.696 02:49:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57933 00:05:03.696 02:49:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57933 00:05:03.696 02:49:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.696 02:49:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57933 ']' 00:05:03.696 02:49:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.696 02:49:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.696 02:49:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.696 02:49:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.696 02:49:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:03.955 [2024-12-05 02:49:34.638868] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:03.955 [2024-12-05 02:49:34.639065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57933 ] 00:05:04.214 [2024-12-05 02:49:34.824807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.214 [2024-12-05 02:49:34.906385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.473 [2024-12-05 02:49:35.090771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:04.730 02:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.730 02:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:04.730 02:49:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.730 02:49:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:04.730 02:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:04.730 02:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:04.730 02:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.730 02:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.730 02:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.730 02:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.730 02:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.730 02:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.730 02:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.730 02:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:04.730 02:49:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:04.989 [2024-12-05 02:49:35.688233] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:04.989 [2024-12-05 02:49:35.688426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57951 ] 00:05:05.248 [2024-12-05 02:49:35.866566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.248 [2024-12-05 02:49:35.953400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.248 [2024-12-05 02:49:35.953539] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:05.248 [2024-12-05 02:49:35.953562] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:05.248 [2024-12-05 02:49:35.953592] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:05.508 02:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:05.508 02:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:05.508 02:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:05.508 02:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:05.508 02:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:05.508 02:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:05.508 02:49:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:05.508 02:49:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57933 00:05:05.508 02:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57933 ']' 00:05:05.508 02:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57933 00:05:05.508 02:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:05.508 02:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.508 02:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57933 00:05:05.508 02:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.508 02:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.508 killing process with pid 57933 00:05:05.508 02:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57933' 00:05:05.508 02:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57933 00:05:05.508 02:49:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57933 00:05:07.414 00:05:07.414 real 0m3.409s 00:05:07.414 user 0m3.804s 00:05:07.414 sys 0m0.533s 00:05:07.414 02:49:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.414 ************************************ 00:05:07.414 END TEST exit_on_failed_rpc_init 00:05:07.414 ************************************ 00:05:07.414 02:49:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:07.414 02:49:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:07.414 00:05:07.414 real 0m20.676s 00:05:07.414 user 0m19.926s 00:05:07.414 sys 0m1.924s 00:05:07.414 ************************************ 00:05:07.414 END TEST skip_rpc 00:05:07.414 ************************************ 00:05:07.414 02:49:37 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.414 02:49:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.414 02:49:38 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:07.414 02:49:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.414 02:49:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.414 02:49:38 -- common/autotest_common.sh@10 -- # set +x 00:05:07.414 
************************************ 00:05:07.414 START TEST rpc_client 00:05:07.414 ************************************ 00:05:07.414 02:49:38 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:07.414 * Looking for test storage... 00:05:07.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:07.414 02:49:38 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:07.414 02:49:38 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:07.414 02:49:38 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:07.414 02:49:38 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.414 02:49:38 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:07.414 02:49:38 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.414 02:49:38 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:07.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.414 --rc genhtml_branch_coverage=1 00:05:07.414 --rc genhtml_function_coverage=1 00:05:07.414 --rc genhtml_legend=1 00:05:07.414 --rc geninfo_all_blocks=1 00:05:07.414 --rc geninfo_unexecuted_blocks=1 00:05:07.414 00:05:07.414 ' 00:05:07.414 02:49:38 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:07.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.414 --rc genhtml_branch_coverage=1 00:05:07.414 --rc genhtml_function_coverage=1 00:05:07.414 --rc genhtml_legend=1 00:05:07.414 --rc geninfo_all_blocks=1 00:05:07.414 --rc geninfo_unexecuted_blocks=1 00:05:07.414 00:05:07.414 ' 00:05:07.414 02:49:38 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:07.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.414 --rc genhtml_branch_coverage=1 00:05:07.414 --rc genhtml_function_coverage=1 00:05:07.414 --rc genhtml_legend=1 00:05:07.414 --rc geninfo_all_blocks=1 00:05:07.414 --rc geninfo_unexecuted_blocks=1 00:05:07.414 00:05:07.414 ' 00:05:07.414 02:49:38 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:07.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.414 --rc genhtml_branch_coverage=1 00:05:07.414 --rc genhtml_function_coverage=1 00:05:07.414 --rc genhtml_legend=1 00:05:07.414 --rc geninfo_all_blocks=1 00:05:07.414 --rc geninfo_unexecuted_blocks=1 00:05:07.414 00:05:07.414 ' 00:05:07.414 02:49:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:07.414 OK 00:05:07.675 02:49:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:07.675 00:05:07.675 real 0m0.246s 00:05:07.675 user 0m0.135s 00:05:07.675 sys 0m0.117s 00:05:07.675 02:49:38 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.675 02:49:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:07.675 ************************************ 00:05:07.675 END TEST rpc_client 00:05:07.675 ************************************ 00:05:07.675 02:49:38 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:07.675 02:49:38 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.675 02:49:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.675 02:49:38 -- common/autotest_common.sh@10 -- # set +x 00:05:07.675 ************************************ 00:05:07.675 START TEST json_config 00:05:07.675 ************************************ 00:05:07.675 02:49:38 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:07.675 02:49:38 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:07.675 02:49:38 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:07.675 02:49:38 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:07.675 02:49:38 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:07.675 02:49:38 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.675 02:49:38 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.675 02:49:38 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.675 02:49:38 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.675 02:49:38 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.675 02:49:38 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.675 02:49:38 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.675 02:49:38 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.675 02:49:38 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.675 02:49:38 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.675 02:49:38 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.675 02:49:38 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:07.675 02:49:38 json_config -- scripts/common.sh@345 -- # : 1 00:05:07.675 02:49:38 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.675 02:49:38 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.675 02:49:38 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:07.675 02:49:38 json_config -- scripts/common.sh@353 -- # local d=1 00:05:07.675 02:49:38 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.675 02:49:38 json_config -- scripts/common.sh@355 -- # echo 1 00:05:07.675 02:49:38 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.675 02:49:38 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:07.676 02:49:38 json_config -- scripts/common.sh@353 -- # local d=2 00:05:07.676 02:49:38 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.676 02:49:38 json_config -- scripts/common.sh@355 -- # echo 2 00:05:07.676 02:49:38 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.676 02:49:38 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.676 02:49:38 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.676 02:49:38 json_config -- scripts/common.sh@368 -- # return 0 00:05:07.676 02:49:38 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.676 02:49:38 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:07.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.676 --rc genhtml_branch_coverage=1 00:05:07.676 --rc genhtml_function_coverage=1 00:05:07.676 --rc genhtml_legend=1 00:05:07.676 --rc geninfo_all_blocks=1 00:05:07.676 --rc geninfo_unexecuted_blocks=1 00:05:07.676 00:05:07.676 ' 00:05:07.676 02:49:38 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:07.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.676 --rc genhtml_branch_coverage=1 00:05:07.676 --rc genhtml_function_coverage=1 00:05:07.676 --rc genhtml_legend=1 00:05:07.676 --rc geninfo_all_blocks=1 00:05:07.676 --rc geninfo_unexecuted_blocks=1 00:05:07.676 00:05:07.676 ' 00:05:07.676 02:49:38 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:07.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.676 --rc genhtml_branch_coverage=1 00:05:07.676 --rc genhtml_function_coverage=1 00:05:07.676 --rc genhtml_legend=1 00:05:07.676 --rc geninfo_all_blocks=1 00:05:07.676 --rc geninfo_unexecuted_blocks=1 00:05:07.676 00:05:07.676 ' 00:05:07.676 02:49:38 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:07.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.676 --rc genhtml_branch_coverage=1 00:05:07.676 --rc genhtml_function_coverage=1 00:05:07.676 --rc genhtml_legend=1 00:05:07.676 --rc geninfo_all_blocks=1 00:05:07.676 --rc geninfo_unexecuted_blocks=1 00:05:07.676 00:05:07.676 ' 00:05:07.676 02:49:38 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.676 02:49:38 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:07.676 02:49:38 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:07.676 02:49:38 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.676 02:49:38 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.676 02:49:38 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.676 02:49:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.676 02:49:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.676 02:49:38 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.676 02:49:38 json_config -- paths/export.sh@5 -- # export PATH 00:05:07.676 02:49:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@51 -- # : 0 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:07.676 02:49:38 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:07.676 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:07.676 02:49:38 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:07.676 02:49:38 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:07.676 02:49:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:07.676 02:49:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:07.676 02:49:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:07.676 02:49:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:07.676 02:49:38 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:07.676 02:49:38 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:07.676 02:49:38 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:07.676 02:49:38 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:07.676 02:49:38 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:07.676 02:49:38 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:07.676 INFO: JSON configuration test init 00:05:07.676 02:49:38 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:07.676 02:49:38 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:07.676 02:49:38 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:07.677 02:49:38 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:07.677 02:49:38 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:07.677 02:49:38 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:07.677 02:49:38 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:07.677 02:49:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.677 02:49:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.677 02:49:38 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:07.677 02:49:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.677 02:49:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.677 02:49:38 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:07.677 02:49:38 json_config -- json_config/common.sh@9 -- # local app=target 00:05:07.677 02:49:38 json_config -- json_config/common.sh@10 -- # shift 
00:05:07.677 02:49:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.677 02:49:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.677 02:49:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.677 02:49:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.677 02:49:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.677 02:49:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58110 00:05:07.677 02:49:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.677 Waiting for target to run... 00:05:07.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.677 02:49:38 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:07.677 02:49:38 json_config -- json_config/common.sh@25 -- # waitforlisten 58110 /var/tmp/spdk_tgt.sock 00:05:07.677 02:49:38 json_config -- common/autotest_common.sh@835 -- # '[' -z 58110 ']' 00:05:07.677 02:49:38 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.677 02:49:38 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.677 02:49:38 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.677 02:49:38 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.677 02:49:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.936 [2024-12-05 02:49:38.638548] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:07.936 [2024-12-05 02:49:38.639119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58110 ] 00:05:08.196 [2024-12-05 02:49:38.982459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.454 [2024-12-05 02:49:39.057413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.714 02:49:39 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.714 00:05:08.714 02:49:39 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:08.714 02:49:39 json_config -- json_config/common.sh@26 -- # echo '' 00:05:08.714 02:49:39 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:08.714 02:49:39 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:08.714 02:49:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:08.714 02:49:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.714 02:49:39 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:08.714 02:49:39 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:08.714 02:49:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:08.714 02:49:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.973 02:49:39 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:08.973 02:49:39 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:08.973 02:49:39 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:09.233 [2024-12-05 02:49:39.971315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:09.801 02:49:40 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:09.801 02:49:40 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:09.801 02:49:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:09.801 02:49:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.801 02:49:40 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:09.801 02:49:40 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:09.801 02:49:40 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:09.801 02:49:40 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:09.801 02:49:40 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:09.801 02:49:40 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:09.801 02:49:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:09.801 02:49:40 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@54 -- # sort 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:10.061 02:49:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.061 02:49:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:10.061 02:49:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.061 02:49:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.061 02:49:40 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:10.061 02:49:40 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:10.061 02:49:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:10.319 MallocForNvmf0 00:05:10.319 02:49:41 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:10.319 02:49:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:10.886 MallocForNvmf1 00:05:10.886 02:49:41 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:10.886 02:49:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:10.886 [2024-12-05 02:49:41.701483] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:10.886 02:49:41 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:10.886 02:49:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:11.454 02:49:41 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:11.454 02:49:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:11.454 02:49:42 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:11.454 02:49:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:11.713 02:49:42 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:11.713 02:49:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:11.972 [2024-12-05 02:49:42.650340] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:11.972 02:49:42 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:11.972 02:49:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.972 02:49:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.972 02:49:42 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:11.972 02:49:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.972 02:49:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.972 02:49:42 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:05:11.972 02:49:42 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:11.972 02:49:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:12.232 MallocBdevForConfigChangeCheck 00:05:12.232 02:49:43 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:12.232 02:49:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.232 02:49:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.492 02:49:43 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:12.492 02:49:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.752 INFO: shutting down applications... 00:05:12.752 02:49:43 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:12.752 02:49:43 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:12.752 02:49:43 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:12.752 02:49:43 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:12.752 02:49:43 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:13.321 Calling clear_iscsi_subsystem 00:05:13.321 Calling clear_nvmf_subsystem 00:05:13.321 Calling clear_nbd_subsystem 00:05:13.321 Calling clear_ublk_subsystem 00:05:13.321 Calling clear_vhost_blk_subsystem 00:05:13.321 Calling clear_vhost_scsi_subsystem 00:05:13.321 Calling clear_bdev_subsystem 00:05:13.321 02:49:43 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:13.321 02:49:43 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:13.321 02:49:43 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:13.321 02:49:43 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.321 02:49:43 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:13.321 02:49:43 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:13.581 02:49:44 json_config -- json_config/json_config.sh@352 -- # break 00:05:13.581 02:49:44 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:13.581 02:49:44 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:13.581 02:49:44 json_config -- json_config/common.sh@31 -- # local app=target 00:05:13.581 02:49:44 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:13.581 02:49:44 json_config -- json_config/common.sh@35 -- # [[ -n 58110 ]] 00:05:13.581 02:49:44 json_config -- json_config/common.sh@38 -- # kill -SIGINT 58110 00:05:13.581 02:49:44 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:13.581 02:49:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.581 02:49:44 json_config -- json_config/common.sh@41 -- # kill -0 58110 00:05:13.581 02:49:44 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:05:14.149 02:49:44 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.149 02:49:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.149 02:49:44 json_config -- json_config/common.sh@41 -- # kill -0 58110 00:05:14.149 02:49:44 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.718 02:49:45 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.718 02:49:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.718 02:49:45 json_config -- json_config/common.sh@41 -- # kill -0 58110 00:05:14.718 02:49:45 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:14.718 02:49:45 json_config -- json_config/common.sh@43 -- # break 00:05:14.718 SPDK target shutdown done 00:05:14.718 INFO: relaunching applications... 00:05:14.718 02:49:45 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:14.718 02:49:45 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:14.718 02:49:45 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:14.718 02:49:45 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:14.718 02:49:45 json_config -- json_config/common.sh@9 -- # local app=target 00:05:14.718 02:49:45 json_config -- json_config/common.sh@10 -- # shift 00:05:14.718 02:49:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.718 02:49:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.718 02:49:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.718 02:49:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.718 02:49:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.718 02:49:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58318 00:05:14.718 02:49:45 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:14.718 02:49:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.718 Waiting for target to run... 00:05:14.718 02:49:45 json_config -- json_config/common.sh@25 -- # waitforlisten 58318 /var/tmp/spdk_tgt.sock 00:05:14.718 02:49:45 json_config -- common/autotest_common.sh@835 -- # '[' -z 58318 ']' 00:05:14.718 02:49:45 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.718 02:49:45 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.718 02:49:45 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:14.718 02:49:45 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.718 02:49:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.718 [2024-12-05 02:49:45.446868] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
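For context, the json_config test that is shutting down and relaunching here built its NVMe-oF state over the RPC socket before saving it to spdk_tgt_config.json. A minimal manual equivalent, assembled only from commands that appear in this trace (the save_config redirect target and the exact ordering are illustrative, not taken from the script), looks roughly like this:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk_tgt.sock

# Bdevs and NVMe-oF plumbing, as traced at json_config.sh@249-256 above:
$rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
$rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

# Persist the state, then restart the target from the saved file, as at
# json_config.sh@379 / json_config/common.sh@21:
$rpc -s $sock save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r $sock \
    --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

Relaunching with --json recreates the same bdevs, transport, subsystem and listener without replaying any RPCs, which is why the transport-init and listener notices reappear below.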
00:05:14.718 [2024-12-05 02:49:45.447237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58318 ] 00:05:15.025 [2024-12-05 02:49:45.768209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.312 [2024-12-05 02:49:45.849543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.312 [2024-12-05 02:49:46.130380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:15.880 [2024-12-05 02:49:46.690344] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:15.880 [2024-12-05 02:49:46.722546] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:16.140 00:05:16.140 INFO: Checking if target configuration is the same... 00:05:16.140 02:49:46 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.140 02:49:46 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:16.140 02:49:46 json_config -- json_config/common.sh@26 -- # echo '' 00:05:16.140 02:49:46 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:16.140 02:49:46 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:16.140 02:49:46 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:16.140 02:49:46 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:16.140 02:49:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.140 + '[' 2 -ne 2 ']' 00:05:16.140 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:16.140 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:16.140 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:16.140 +++ basename /dev/fd/62 00:05:16.140 ++ mktemp /tmp/62.XXX 00:05:16.140 + tmp_file_1=/tmp/62.vRY 00:05:16.140 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:16.140 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:16.140 + tmp_file_2=/tmp/spdk_tgt_config.json.bOw 00:05:16.140 + ret=0 00:05:16.140 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:16.400 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:16.660 + diff -u /tmp/62.vRY /tmp/spdk_tgt_config.json.bOw 00:05:16.660 INFO: JSON config files are the same 00:05:16.660 + echo 'INFO: JSON config files are the same' 00:05:16.660 + rm /tmp/62.vRY /tmp/spdk_tgt_config.json.bOw 00:05:16.660 + exit 0 00:05:16.660 02:49:47 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:16.660 INFO: changing configuration and checking if this can be detected... 00:05:16.660 02:49:47 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
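The "configuration is the same" check just traced is a normalized diff: both the live configuration and the saved file are passed through config_filter.py -method sort, presumably to make the comparison order-insensitive, and then compared with diff -u. Reproduced by hand (the mktemp handling here is illustrative; this run used /tmp/62.vRY and /tmp/spdk_tgt_config.json.bOw):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
cfg=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

live=$(mktemp) && saved=$(mktemp)
$rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > "$live"
$filter -method sort < "$cfg" > "$saved"
diff -u "$live" "$saved" && echo 'INFO: JSON config files are the same'
rm "$live" "$saved"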
00:05:16.660 02:49:47 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:16.660 02:49:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:16.919 02:49:47 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:16.919 02:49:47 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:16.919 02:49:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.919 + '[' 2 -ne 2 ']' 00:05:16.919 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:16.919 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:16.920 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:16.920 +++ basename /dev/fd/62 00:05:16.920 ++ mktemp /tmp/62.XXX 00:05:16.920 + tmp_file_1=/tmp/62.us5 00:05:16.920 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:16.920 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:16.920 + tmp_file_2=/tmp/spdk_tgt_config.json.lww 00:05:16.920 + ret=0 00:05:16.920 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:17.179 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:17.437 + diff -u /tmp/62.us5 /tmp/spdk_tgt_config.json.lww 00:05:17.437 + ret=1 00:05:17.437 + echo '=== Start of file: /tmp/62.us5 ===' 00:05:17.437 + cat /tmp/62.us5 00:05:17.437 + echo '=== End of file: /tmp/62.us5 ===' 00:05:17.437 + echo '' 00:05:17.437 + echo '=== Start of file: /tmp/spdk_tgt_config.json.lww ===' 00:05:17.437 + cat /tmp/spdk_tgt_config.json.lww 00:05:17.437 + echo '=== End of file: /tmp/spdk_tgt_config.json.lww ===' 00:05:17.437 + echo '' 00:05:17.437 + rm /tmp/62.us5 /tmp/spdk_tgt_config.json.lww 00:05:17.437 + exit 1 00:05:17.437 INFO: configuration change detected. 00:05:17.437 02:49:48 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
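The change-detection step mirrors the previous check, but only after deleting the sentinel bdev MallocBdevForConfigChangeCheck that was created earlier at json_config.sh@307; this time the diff is expected to fail, which is the ret=1 seen above. A sketch of the same idea (the process-substitution form is mine, not the script's exact mechanics):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py

$rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
if ! diff -u <($rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort) \
             <($filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json); then
    echo 'INFO: configuration change detected.'
fi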
00:05:17.437 02:49:48 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:17.437 02:49:48 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:17.437 02:49:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.438 02:49:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.438 02:49:48 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:17.438 02:49:48 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:17.438 02:49:48 json_config -- json_config/json_config.sh@324 -- # [[ -n 58318 ]] 00:05:17.438 02:49:48 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:17.438 02:49:48 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:17.438 02:49:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.438 02:49:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.438 02:49:48 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:17.438 02:49:48 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:17.438 02:49:48 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:17.438 02:49:48 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:17.438 02:49:48 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:17.438 02:49:48 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:17.438 02:49:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:17.438 02:49:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.438 02:49:48 json_config -- json_config/json_config.sh@330 -- # killprocess 58318 00:05:17.438 02:49:48 json_config -- common/autotest_common.sh@954 -- # '[' -z 58318 ']' 00:05:17.438 02:49:48 json_config -- common/autotest_common.sh@958 -- # kill -0 58318 00:05:17.438 02:49:48 json_config -- common/autotest_common.sh@959 -- # uname 00:05:17.438 02:49:48 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.438 02:49:48 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58318 00:05:17.438 02:49:48 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.438 killing process with pid 58318 00:05:17.438 02:49:48 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.438 02:49:48 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58318' 00:05:17.438 02:49:48 json_config -- common/autotest_common.sh@973 -- # kill 58318 00:05:17.438 02:49:48 json_config -- common/autotest_common.sh@978 -- # wait 58318 00:05:18.373 02:49:48 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:18.373 02:49:48 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:18.373 02:49:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:18.373 02:49:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.373 02:49:48 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:18.373 INFO: Success 00:05:18.373 02:49:48 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:18.373 00:05:18.373 real 0m10.608s 00:05:18.373 user 0m14.416s 00:05:18.373 sys 0m1.704s 00:05:18.373 
02:49:48 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.373 ************************************ 00:05:18.373 END TEST json_config 00:05:18.373 ************************************ 00:05:18.373 02:49:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.373 02:49:48 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:18.373 02:49:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.373 02:49:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.373 02:49:48 -- common/autotest_common.sh@10 -- # set +x 00:05:18.373 ************************************ 00:05:18.373 START TEST json_config_extra_key 00:05:18.373 ************************************ 00:05:18.373 02:49:48 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:18.373 02:49:49 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:18.373 02:49:49 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:18.373 02:49:49 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:18.373 02:49:49 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:18.373 02:49:49 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.373 02:49:49 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.373 02:49:49 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.373 02:49:49 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.373 02:49:49 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.373 02:49:49 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.373 02:49:49 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.373 02:49:49 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.373 02:49:49 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.373 02:49:49 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.373 02:49:49 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:18.374 02:49:49 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.374 02:49:49 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:18.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.374 --rc genhtml_branch_coverage=1 00:05:18.374 --rc genhtml_function_coverage=1 00:05:18.374 --rc genhtml_legend=1 00:05:18.374 --rc geninfo_all_blocks=1 00:05:18.374 --rc geninfo_unexecuted_blocks=1 00:05:18.374 00:05:18.374 ' 00:05:18.374 02:49:49 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:18.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.374 --rc genhtml_branch_coverage=1 00:05:18.374 --rc genhtml_function_coverage=1 00:05:18.374 --rc genhtml_legend=1 00:05:18.374 --rc geninfo_all_blocks=1 00:05:18.374 --rc geninfo_unexecuted_blocks=1 00:05:18.374 00:05:18.374 ' 00:05:18.374 02:49:49 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:18.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.374 --rc genhtml_branch_coverage=1 00:05:18.374 --rc genhtml_function_coverage=1 00:05:18.374 --rc genhtml_legend=1 00:05:18.374 --rc geninfo_all_blocks=1 00:05:18.374 --rc geninfo_unexecuted_blocks=1 00:05:18.374 00:05:18.374 ' 00:05:18.374 02:49:49 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:18.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.374 --rc genhtml_branch_coverage=1 00:05:18.374 --rc genhtml_function_coverage=1 00:05:18.374 --rc genhtml_legend=1 00:05:18.374 --rc geninfo_all_blocks=1 00:05:18.374 --rc geninfo_unexecuted_blocks=1 00:05:18.374 00:05:18.374 ' 00:05:18.374 02:49:49 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.374 02:49:49 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.374 02:49:49 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.374 02:49:49 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.374 02:49:49 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.374 02:49:49 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.374 02:49:49 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:18.374 02:49:49 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.374 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.374 02:49:49 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.374 02:49:49 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:18.374 02:49:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:18.374 02:49:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:18.374 02:49:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:18.374 02:49:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:18.374 02:49:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:18.374 02:49:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:18.374 02:49:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:18.374 02:49:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:18.374 INFO: launching applications... 00:05:18.374 02:49:49 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:18.374 02:49:49 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
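The launch that follows (json_config_extra_key.sh@25, pid 58479 in this run) is the same json_config_test_start_app helper as before, but pointed at the test's extra_key.json rather than a freshly saved configuration. In plain shell it amounts to roughly (the backgrounding and the comment are descriptive, not copied from the script):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock \
    --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
# waitforlisten then blocks until the target answers on /var/tmp/spdk_tgt.sock,
# which is the "Waiting for target to run..." message below.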
00:05:18.374 02:49:49 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:18.374 02:49:49 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:18.374 02:49:49 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:18.374 02:49:49 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:18.374 02:49:49 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:18.374 02:49:49 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:18.374 02:49:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.374 02:49:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.374 02:49:49 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58479 00:05:18.374 Waiting for target to run... 00:05:18.374 02:49:49 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:18.374 02:49:49 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58479 /var/tmp/spdk_tgt.sock 00:05:18.374 02:49:49 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58479 ']' 00:05:18.374 02:49:49 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:18.374 02:49:49 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:18.374 02:49:49 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:18.374 02:49:49 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:18.374 02:49:49 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.374 02:49:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:18.633 [2024-12-05 02:49:49.258865] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:18.633 [2024-12-05 02:49:49.259014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58479 ] 00:05:18.892 [2024-12-05 02:49:49.584243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.892 [2024-12-05 02:49:49.659436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.151 [2024-12-05 02:49:49.826930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:19.410 02:49:50 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.410 02:49:50 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:19.410 00:05:19.410 02:49:50 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:19.410 INFO: shutting down applications... 00:05:19.410 02:49:50 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
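The shutdown that follows uses the bounded poll traced at json_config/common.sh@38-45, the same helper that stopped pid 58110 earlier: send SIGINT, then re-check the pid every half second for up to 30 iterations. Roughly (written as a for-loop for brevity; the stderr redirect is an addition of this sketch):

pid=58479   # target pid in this run
kill -SIGINT "$pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2> /dev/null || break   # process gone, stop waiting
    sleep 0.5
done
echo 'SPDK target shutdown done'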
00:05:19.410 02:49:50 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:19.410 02:49:50 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:19.410 02:49:50 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:19.410 02:49:50 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58479 ]] 00:05:19.410 02:49:50 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58479 00:05:19.410 02:49:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:19.410 02:49:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.410 02:49:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58479 00:05:19.410 02:49:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:19.979 02:49:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:19.979 02:49:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.979 02:49:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58479 00:05:19.979 02:49:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.546 02:49:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.546 02:49:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.546 02:49:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58479 00:05:20.546 02:49:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.116 02:49:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.116 02:49:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.116 02:49:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58479 00:05:21.116 02:49:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.685 02:49:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.685 02:49:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.685 02:49:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58479 00:05:21.685 02:49:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:21.685 02:49:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:21.685 02:49:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:21.685 SPDK target shutdown done 00:05:21.685 02:49:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:21.685 Success 00:05:21.685 02:49:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:21.685 00:05:21.685 real 0m3.269s 00:05:21.685 user 0m3.230s 00:05:21.685 sys 0m0.446s 00:05:21.685 02:49:52 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.685 ************************************ 00:05:21.685 END TEST json_config_extra_key 00:05:21.685 ************************************ 00:05:21.685 02:49:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:21.685 02:49:52 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.685 02:49:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.685 02:49:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.686 02:49:52 -- common/autotest_common.sh@10 -- # set +x 00:05:21.686 
************************************ 00:05:21.686 START TEST alias_rpc 00:05:21.686 ************************************ 00:05:21.686 02:49:52 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.686 * Looking for test storage... 00:05:21.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:21.686 02:49:52 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:21.686 02:49:52 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:21.686 02:49:52 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:21.686 02:49:52 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.686 02:49:52 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:21.686 02:49:52 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.686 02:49:52 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:21.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.686 --rc genhtml_branch_coverage=1 00:05:21.686 --rc genhtml_function_coverage=1 00:05:21.686 --rc genhtml_legend=1 00:05:21.686 --rc geninfo_all_blocks=1 00:05:21.686 --rc geninfo_unexecuted_blocks=1 00:05:21.686 00:05:21.686 ' 00:05:21.686 02:49:52 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:21.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.686 --rc genhtml_branch_coverage=1 00:05:21.686 --rc genhtml_function_coverage=1 00:05:21.686 --rc genhtml_legend=1 00:05:21.686 --rc geninfo_all_blocks=1 00:05:21.686 --rc geninfo_unexecuted_blocks=1 00:05:21.686 00:05:21.686 ' 00:05:21.686 02:49:52 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:21.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.686 --rc genhtml_branch_coverage=1 00:05:21.686 --rc genhtml_function_coverage=1 00:05:21.686 --rc genhtml_legend=1 00:05:21.686 --rc geninfo_all_blocks=1 00:05:21.686 --rc geninfo_unexecuted_blocks=1 00:05:21.686 00:05:21.686 ' 00:05:21.686 02:49:52 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:21.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.686 --rc genhtml_branch_coverage=1 00:05:21.686 --rc genhtml_function_coverage=1 00:05:21.686 --rc genhtml_legend=1 00:05:21.686 --rc geninfo_all_blocks=1 00:05:21.686 --rc geninfo_unexecuted_blocks=1 00:05:21.686 00:05:21.686 ' 00:05:21.686 02:49:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:21.686 02:49:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58576 00:05:21.686 02:49:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58576 00:05:21.686 02:49:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.686 02:49:52 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58576 ']' 00:05:21.686 02:49:52 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.686 02:49:52 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:21.686 02:49:52 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.686 02:49:52 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.686 02:49:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.945 [2024-12-05 02:49:52.600163] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:21.945 [2024-12-05 02:49:52.600389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58576 ] 00:05:21.945 [2024-12-05 02:49:52.776436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.203 [2024-12-05 02:49:52.859380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.203 [2024-12-05 02:49:53.041523] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.770 02:49:53 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.770 02:49:53 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:22.770 02:49:53 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:23.029 02:49:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58576 00:05:23.029 02:49:53 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58576 ']' 00:05:23.029 02:49:53 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58576 00:05:23.029 02:49:53 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:23.029 02:49:53 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.029 02:49:53 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58576 00:05:23.029 killing process with pid 58576 00:05:23.029 02:49:53 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.029 02:49:53 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.029 02:49:53 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58576' 00:05:23.029 02:49:53 alias_rpc -- common/autotest_common.sh@973 -- # kill 58576 00:05:23.029 02:49:53 alias_rpc -- common/autotest_common.sh@978 -- # wait 58576 00:05:24.934 ************************************ 00:05:24.934 END TEST alias_rpc 00:05:24.934 ************************************ 00:05:24.934 00:05:24.934 real 0m3.258s 00:05:24.934 user 0m3.469s 00:05:24.934 sys 0m0.487s 00:05:24.934 02:49:55 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.934 02:49:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.934 02:49:55 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:24.934 02:49:55 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:24.934 02:49:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.934 02:49:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.934 02:49:55 -- common/autotest_common.sh@10 -- # set +x 00:05:24.934 ************************************ 00:05:24.934 START TEST spdkcli_tcp 00:05:24.934 ************************************ 00:05:24.934 02:49:55 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:24.934 * Looking for test storage... 
00:05:24.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:24.934 02:49:55 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.934 02:49:55 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.934 02:49:55 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.934 02:49:55 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:24.934 02:49:55 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.193 02:49:55 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:25.193 02:49:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:25.193 02:49:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.193 02:49:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:25.193 02:49:55 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.193 02:49:55 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.193 02:49:55 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.193 02:49:55 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:25.193 02:49:55 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.193 02:49:55 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:25.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.193 --rc genhtml_branch_coverage=1 00:05:25.193 --rc genhtml_function_coverage=1 00:05:25.193 --rc genhtml_legend=1 00:05:25.193 --rc geninfo_all_blocks=1 00:05:25.193 --rc geninfo_unexecuted_blocks=1 00:05:25.193 00:05:25.193 ' 00:05:25.193 02:49:55 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:25.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.193 --rc genhtml_branch_coverage=1 00:05:25.193 --rc genhtml_function_coverage=1 00:05:25.193 --rc genhtml_legend=1 00:05:25.193 --rc geninfo_all_blocks=1 00:05:25.193 --rc geninfo_unexecuted_blocks=1 00:05:25.193 
00:05:25.193 ' 00:05:25.193 02:49:55 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:25.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.193 --rc genhtml_branch_coverage=1 00:05:25.193 --rc genhtml_function_coverage=1 00:05:25.193 --rc genhtml_legend=1 00:05:25.193 --rc geninfo_all_blocks=1 00:05:25.193 --rc geninfo_unexecuted_blocks=1 00:05:25.193 00:05:25.193 ' 00:05:25.193 02:49:55 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:25.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.194 --rc genhtml_branch_coverage=1 00:05:25.194 --rc genhtml_function_coverage=1 00:05:25.194 --rc genhtml_legend=1 00:05:25.194 --rc geninfo_all_blocks=1 00:05:25.194 --rc geninfo_unexecuted_blocks=1 00:05:25.194 00:05:25.194 ' 00:05:25.194 02:49:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:25.194 02:49:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:25.194 02:49:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:25.194 02:49:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:25.194 02:49:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:25.194 02:49:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:25.194 02:49:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:25.194 02:49:55 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.194 02:49:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.194 02:49:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58679 00:05:25.194 02:49:55 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58679 00:05:25.194 02:49:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:25.194 02:49:55 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58679 ']' 00:05:25.194 02:49:55 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.194 02:49:55 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.194 02:49:55 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.194 02:49:55 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.194 02:49:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.194 [2024-12-05 02:49:55.916797] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
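spdkcli_tcp drives the RPC interface over TCP rather than the Unix socket: once the target started at tcp.sh@24 is listening on /var/tmp/spdk.sock, the trace that follows (tcp.sh@30-33) bridges that socket to 127.0.0.1:9998 with socat and lists the available methods. In shell terms (the backgrounding order is inferred from the trace; IP and port come from tcp.sh@18-19, socat pid 58696 in this run):

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

The long bracketed list below is the output of that rpc_get_methods call.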
00:05:25.194 [2024-12-05 02:49:55.916968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58679 ] 00:05:25.453 [2024-12-05 02:49:56.094384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.453 [2024-12-05 02:49:56.181173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.453 [2024-12-05 02:49:56.181187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.712 [2024-12-05 02:49:56.372785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:26.280 02:49:56 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.280 02:49:56 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:26.280 02:49:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58696 00:05:26.280 02:49:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:26.280 02:49:56 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:26.540 [ 00:05:26.540 "bdev_malloc_delete", 00:05:26.540 "bdev_malloc_create", 00:05:26.540 "bdev_null_resize", 00:05:26.540 "bdev_null_delete", 00:05:26.540 "bdev_null_create", 00:05:26.540 "bdev_nvme_cuse_unregister", 00:05:26.540 "bdev_nvme_cuse_register", 00:05:26.540 "bdev_opal_new_user", 00:05:26.540 "bdev_opal_set_lock_state", 00:05:26.540 "bdev_opal_delete", 00:05:26.540 "bdev_opal_get_info", 00:05:26.540 "bdev_opal_create", 00:05:26.540 "bdev_nvme_opal_revert", 00:05:26.540 "bdev_nvme_opal_init", 00:05:26.540 "bdev_nvme_send_cmd", 00:05:26.540 "bdev_nvme_set_keys", 00:05:26.540 "bdev_nvme_get_path_iostat", 00:05:26.540 "bdev_nvme_get_mdns_discovery_info", 00:05:26.540 "bdev_nvme_stop_mdns_discovery", 00:05:26.540 "bdev_nvme_start_mdns_discovery", 00:05:26.540 "bdev_nvme_set_multipath_policy", 00:05:26.540 "bdev_nvme_set_preferred_path", 00:05:26.540 "bdev_nvme_get_io_paths", 00:05:26.540 "bdev_nvme_remove_error_injection", 00:05:26.540 "bdev_nvme_add_error_injection", 00:05:26.540 "bdev_nvme_get_discovery_info", 00:05:26.540 "bdev_nvme_stop_discovery", 00:05:26.540 "bdev_nvme_start_discovery", 00:05:26.540 "bdev_nvme_get_controller_health_info", 00:05:26.540 "bdev_nvme_disable_controller", 00:05:26.540 "bdev_nvme_enable_controller", 00:05:26.540 "bdev_nvme_reset_controller", 00:05:26.540 "bdev_nvme_get_transport_statistics", 00:05:26.540 "bdev_nvme_apply_firmware", 00:05:26.540 "bdev_nvme_detach_controller", 00:05:26.540 "bdev_nvme_get_controllers", 00:05:26.540 "bdev_nvme_attach_controller", 00:05:26.540 "bdev_nvme_set_hotplug", 00:05:26.540 "bdev_nvme_set_options", 00:05:26.540 "bdev_passthru_delete", 00:05:26.540 "bdev_passthru_create", 00:05:26.540 "bdev_lvol_set_parent_bdev", 00:05:26.540 "bdev_lvol_set_parent", 00:05:26.540 "bdev_lvol_check_shallow_copy", 00:05:26.540 "bdev_lvol_start_shallow_copy", 00:05:26.540 "bdev_lvol_grow_lvstore", 00:05:26.540 "bdev_lvol_get_lvols", 00:05:26.540 "bdev_lvol_get_lvstores", 00:05:26.540 "bdev_lvol_delete", 00:05:26.540 "bdev_lvol_set_read_only", 00:05:26.540 "bdev_lvol_resize", 00:05:26.540 "bdev_lvol_decouple_parent", 00:05:26.540 "bdev_lvol_inflate", 00:05:26.540 "bdev_lvol_rename", 00:05:26.540 "bdev_lvol_clone_bdev", 00:05:26.540 "bdev_lvol_clone", 00:05:26.541 "bdev_lvol_snapshot", 
00:05:26.541 "bdev_lvol_create", 00:05:26.541 "bdev_lvol_delete_lvstore", 00:05:26.541 "bdev_lvol_rename_lvstore", 00:05:26.541 "bdev_lvol_create_lvstore", 00:05:26.541 "bdev_raid_set_options", 00:05:26.541 "bdev_raid_remove_base_bdev", 00:05:26.541 "bdev_raid_add_base_bdev", 00:05:26.541 "bdev_raid_delete", 00:05:26.541 "bdev_raid_create", 00:05:26.541 "bdev_raid_get_bdevs", 00:05:26.541 "bdev_error_inject_error", 00:05:26.541 "bdev_error_delete", 00:05:26.541 "bdev_error_create", 00:05:26.541 "bdev_split_delete", 00:05:26.541 "bdev_split_create", 00:05:26.541 "bdev_delay_delete", 00:05:26.541 "bdev_delay_create", 00:05:26.541 "bdev_delay_update_latency", 00:05:26.541 "bdev_zone_block_delete", 00:05:26.541 "bdev_zone_block_create", 00:05:26.541 "blobfs_create", 00:05:26.541 "blobfs_detect", 00:05:26.541 "blobfs_set_cache_size", 00:05:26.541 "bdev_aio_delete", 00:05:26.541 "bdev_aio_rescan", 00:05:26.541 "bdev_aio_create", 00:05:26.541 "bdev_ftl_set_property", 00:05:26.541 "bdev_ftl_get_properties", 00:05:26.541 "bdev_ftl_get_stats", 00:05:26.541 "bdev_ftl_unmap", 00:05:26.541 "bdev_ftl_unload", 00:05:26.541 "bdev_ftl_delete", 00:05:26.541 "bdev_ftl_load", 00:05:26.541 "bdev_ftl_create", 00:05:26.541 "bdev_virtio_attach_controller", 00:05:26.541 "bdev_virtio_scsi_get_devices", 00:05:26.541 "bdev_virtio_detach_controller", 00:05:26.541 "bdev_virtio_blk_set_hotplug", 00:05:26.541 "bdev_iscsi_delete", 00:05:26.541 "bdev_iscsi_create", 00:05:26.541 "bdev_iscsi_set_options", 00:05:26.541 "bdev_uring_delete", 00:05:26.541 "bdev_uring_rescan", 00:05:26.541 "bdev_uring_create", 00:05:26.541 "accel_error_inject_error", 00:05:26.541 "ioat_scan_accel_module", 00:05:26.541 "dsa_scan_accel_module", 00:05:26.541 "iaa_scan_accel_module", 00:05:26.541 "vfu_virtio_create_fs_endpoint", 00:05:26.541 "vfu_virtio_create_scsi_endpoint", 00:05:26.541 "vfu_virtio_scsi_remove_target", 00:05:26.541 "vfu_virtio_scsi_add_target", 00:05:26.541 "vfu_virtio_create_blk_endpoint", 00:05:26.541 "vfu_virtio_delete_endpoint", 00:05:26.541 "keyring_file_remove_key", 00:05:26.541 "keyring_file_add_key", 00:05:26.541 "keyring_linux_set_options", 00:05:26.541 "fsdev_aio_delete", 00:05:26.541 "fsdev_aio_create", 00:05:26.541 "iscsi_get_histogram", 00:05:26.541 "iscsi_enable_histogram", 00:05:26.541 "iscsi_set_options", 00:05:26.541 "iscsi_get_auth_groups", 00:05:26.541 "iscsi_auth_group_remove_secret", 00:05:26.541 "iscsi_auth_group_add_secret", 00:05:26.541 "iscsi_delete_auth_group", 00:05:26.541 "iscsi_create_auth_group", 00:05:26.541 "iscsi_set_discovery_auth", 00:05:26.541 "iscsi_get_options", 00:05:26.541 "iscsi_target_node_request_logout", 00:05:26.541 "iscsi_target_node_set_redirect", 00:05:26.541 "iscsi_target_node_set_auth", 00:05:26.541 "iscsi_target_node_add_lun", 00:05:26.541 "iscsi_get_stats", 00:05:26.541 "iscsi_get_connections", 00:05:26.541 "iscsi_portal_group_set_auth", 00:05:26.541 "iscsi_start_portal_group", 00:05:26.541 "iscsi_delete_portal_group", 00:05:26.541 "iscsi_create_portal_group", 00:05:26.541 "iscsi_get_portal_groups", 00:05:26.541 "iscsi_delete_target_node", 00:05:26.541 "iscsi_target_node_remove_pg_ig_maps", 00:05:26.541 "iscsi_target_node_add_pg_ig_maps", 00:05:26.541 "iscsi_create_target_node", 00:05:26.541 "iscsi_get_target_nodes", 00:05:26.541 "iscsi_delete_initiator_group", 00:05:26.541 "iscsi_initiator_group_remove_initiators", 00:05:26.541 "iscsi_initiator_group_add_initiators", 00:05:26.541 "iscsi_create_initiator_group", 00:05:26.541 "iscsi_get_initiator_groups", 00:05:26.541 
"nvmf_set_crdt", 00:05:26.541 "nvmf_set_config", 00:05:26.541 "nvmf_set_max_subsystems", 00:05:26.541 "nvmf_stop_mdns_prr", 00:05:26.541 "nvmf_publish_mdns_prr", 00:05:26.541 "nvmf_subsystem_get_listeners", 00:05:26.541 "nvmf_subsystem_get_qpairs", 00:05:26.541 "nvmf_subsystem_get_controllers", 00:05:26.541 "nvmf_get_stats", 00:05:26.541 "nvmf_get_transports", 00:05:26.541 "nvmf_create_transport", 00:05:26.541 "nvmf_get_targets", 00:05:26.541 "nvmf_delete_target", 00:05:26.541 "nvmf_create_target", 00:05:26.541 "nvmf_subsystem_allow_any_host", 00:05:26.541 "nvmf_subsystem_set_keys", 00:05:26.541 "nvmf_subsystem_remove_host", 00:05:26.541 "nvmf_subsystem_add_host", 00:05:26.541 "nvmf_ns_remove_host", 00:05:26.541 "nvmf_ns_add_host", 00:05:26.541 "nvmf_subsystem_remove_ns", 00:05:26.541 "nvmf_subsystem_set_ns_ana_group", 00:05:26.541 "nvmf_subsystem_add_ns", 00:05:26.541 "nvmf_subsystem_listener_set_ana_state", 00:05:26.541 "nvmf_discovery_get_referrals", 00:05:26.541 "nvmf_discovery_remove_referral", 00:05:26.541 "nvmf_discovery_add_referral", 00:05:26.541 "nvmf_subsystem_remove_listener", 00:05:26.541 "nvmf_subsystem_add_listener", 00:05:26.541 "nvmf_delete_subsystem", 00:05:26.541 "nvmf_create_subsystem", 00:05:26.541 "nvmf_get_subsystems", 00:05:26.541 "env_dpdk_get_mem_stats", 00:05:26.541 "nbd_get_disks", 00:05:26.541 "nbd_stop_disk", 00:05:26.541 "nbd_start_disk", 00:05:26.541 "ublk_recover_disk", 00:05:26.541 "ublk_get_disks", 00:05:26.541 "ublk_stop_disk", 00:05:26.541 "ublk_start_disk", 00:05:26.541 "ublk_destroy_target", 00:05:26.541 "ublk_create_target", 00:05:26.541 "virtio_blk_create_transport", 00:05:26.541 "virtio_blk_get_transports", 00:05:26.541 "vhost_controller_set_coalescing", 00:05:26.541 "vhost_get_controllers", 00:05:26.541 "vhost_delete_controller", 00:05:26.541 "vhost_create_blk_controller", 00:05:26.541 "vhost_scsi_controller_remove_target", 00:05:26.541 "vhost_scsi_controller_add_target", 00:05:26.541 "vhost_start_scsi_controller", 00:05:26.541 "vhost_create_scsi_controller", 00:05:26.541 "thread_set_cpumask", 00:05:26.541 "scheduler_set_options", 00:05:26.541 "framework_get_governor", 00:05:26.541 "framework_get_scheduler", 00:05:26.541 "framework_set_scheduler", 00:05:26.541 "framework_get_reactors", 00:05:26.541 "thread_get_io_channels", 00:05:26.541 "thread_get_pollers", 00:05:26.541 "thread_get_stats", 00:05:26.541 "framework_monitor_context_switch", 00:05:26.541 "spdk_kill_instance", 00:05:26.541 "log_enable_timestamps", 00:05:26.541 "log_get_flags", 00:05:26.541 "log_clear_flag", 00:05:26.541 "log_set_flag", 00:05:26.541 "log_get_level", 00:05:26.541 "log_set_level", 00:05:26.541 "log_get_print_level", 00:05:26.541 "log_set_print_level", 00:05:26.541 "framework_enable_cpumask_locks", 00:05:26.541 "framework_disable_cpumask_locks", 00:05:26.541 "framework_wait_init", 00:05:26.541 "framework_start_init", 00:05:26.541 "scsi_get_devices", 00:05:26.541 "bdev_get_histogram", 00:05:26.541 "bdev_enable_histogram", 00:05:26.541 "bdev_set_qos_limit", 00:05:26.541 "bdev_set_qd_sampling_period", 00:05:26.541 "bdev_get_bdevs", 00:05:26.541 "bdev_reset_iostat", 00:05:26.541 "bdev_get_iostat", 00:05:26.541 "bdev_examine", 00:05:26.541 "bdev_wait_for_examine", 00:05:26.541 "bdev_set_options", 00:05:26.541 "accel_get_stats", 00:05:26.541 "accel_set_options", 00:05:26.541 "accel_set_driver", 00:05:26.541 "accel_crypto_key_destroy", 00:05:26.541 "accel_crypto_keys_get", 00:05:26.541 "accel_crypto_key_create", 00:05:26.541 "accel_assign_opc", 00:05:26.541 
"accel_get_module_info", 00:05:26.541 "accel_get_opc_assignments", 00:05:26.541 "vmd_rescan", 00:05:26.541 "vmd_remove_device", 00:05:26.541 "vmd_enable", 00:05:26.541 "sock_get_default_impl", 00:05:26.541 "sock_set_default_impl", 00:05:26.541 "sock_impl_set_options", 00:05:26.541 "sock_impl_get_options", 00:05:26.541 "iobuf_get_stats", 00:05:26.541 "iobuf_set_options", 00:05:26.541 "keyring_get_keys", 00:05:26.541 "vfu_tgt_set_base_path", 00:05:26.541 "framework_get_pci_devices", 00:05:26.541 "framework_get_config", 00:05:26.541 "framework_get_subsystems", 00:05:26.541 "fsdev_set_opts", 00:05:26.541 "fsdev_get_opts", 00:05:26.541 "trace_get_info", 00:05:26.541 "trace_get_tpoint_group_mask", 00:05:26.541 "trace_disable_tpoint_group", 00:05:26.541 "trace_enable_tpoint_group", 00:05:26.541 "trace_clear_tpoint_mask", 00:05:26.541 "trace_set_tpoint_mask", 00:05:26.541 "notify_get_notifications", 00:05:26.541 "notify_get_types", 00:05:26.541 "spdk_get_version", 00:05:26.541 "rpc_get_methods" 00:05:26.541 ] 00:05:26.541 02:49:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:26.541 02:49:57 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.541 02:49:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.541 02:49:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:26.541 02:49:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58679 00:05:26.541 02:49:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58679 ']' 00:05:26.541 02:49:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58679 00:05:26.541 02:49:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:26.541 02:49:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.541 02:49:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58679 00:05:26.541 killing process with pid 58679 00:05:26.541 02:49:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.541 02:49:57 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.541 02:49:57 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58679' 00:05:26.541 02:49:57 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58679 00:05:26.541 02:49:57 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58679 00:05:28.448 ************************************ 00:05:28.448 END TEST spdkcli_tcp 00:05:28.448 ************************************ 00:05:28.448 00:05:28.448 real 0m3.408s 00:05:28.448 user 0m6.239s 00:05:28.448 sys 0m0.536s 00:05:28.448 02:49:59 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.448 02:49:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.448 02:49:59 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:28.448 02:49:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.448 02:49:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.448 02:49:59 -- common/autotest_common.sh@10 -- # set +x 00:05:28.448 ************************************ 00:05:28.448 START TEST dpdk_mem_utility 00:05:28.448 ************************************ 00:05:28.448 02:49:59 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:28.448 * Looking for test storage... 
00:05:28.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:28.448 02:49:59 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.448 02:49:59 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.448 02:49:59 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:28.448 02:49:59 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.448 02:49:59 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:28.448 02:49:59 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.448 02:49:59 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:28.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.448 --rc genhtml_branch_coverage=1 00:05:28.448 --rc genhtml_function_coverage=1 00:05:28.448 --rc genhtml_legend=1 00:05:28.448 --rc geninfo_all_blocks=1 00:05:28.448 --rc geninfo_unexecuted_blocks=1 00:05:28.448 00:05:28.448 ' 00:05:28.448 02:49:59 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:28.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.448 --rc 
genhtml_branch_coverage=1 00:05:28.448 --rc genhtml_function_coverage=1 00:05:28.448 --rc genhtml_legend=1 00:05:28.448 --rc geninfo_all_blocks=1 00:05:28.448 --rc geninfo_unexecuted_blocks=1 00:05:28.448 00:05:28.448 ' 00:05:28.448 02:49:59 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:28.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.448 --rc genhtml_branch_coverage=1 00:05:28.448 --rc genhtml_function_coverage=1 00:05:28.448 --rc genhtml_legend=1 00:05:28.448 --rc geninfo_all_blocks=1 00:05:28.448 --rc geninfo_unexecuted_blocks=1 00:05:28.448 00:05:28.448 ' 00:05:28.448 02:49:59 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:28.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.448 --rc genhtml_branch_coverage=1 00:05:28.448 --rc genhtml_function_coverage=1 00:05:28.448 --rc genhtml_legend=1 00:05:28.448 --rc geninfo_all_blocks=1 00:05:28.448 --rc geninfo_unexecuted_blocks=1 00:05:28.448 00:05:28.448 ' 00:05:28.448 02:49:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:28.448 02:49:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58790 00:05:28.448 02:49:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58790 00:05:28.448 02:49:59 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58790 ']' 00:05:28.448 02:49:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.448 02:49:59 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.448 02:49:59 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.448 02:49:59 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.448 02:49:59 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.448 02:49:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.708 [2024-12-05 02:49:59.339720] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:28.708 [2024-12-05 02:49:59.339912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58790 ] 00:05:28.708 [2024-12-05 02:49:59.517776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.968 [2024-12-05 02:49:59.604723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.968 [2024-12-05 02:49:59.801356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:29.538 02:50:00 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.538 02:50:00 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:29.538 02:50:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:29.538 02:50:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:29.538 02:50:00 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.538 02:50:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:29.538 { 00:05:29.538 "filename": "/tmp/spdk_mem_dump.txt" 00:05:29.538 } 00:05:29.538 02:50:00 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.538 02:50:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:29.538 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:29.538 1 heaps totaling size 824.000000 MiB 00:05:29.538 size: 824.000000 MiB heap id: 0 00:05:29.538 end heaps---------- 00:05:29.538 9 mempools totaling size 603.782043 MiB 00:05:29.538 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:29.538 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:29.538 size: 100.555481 MiB name: bdev_io_58790 00:05:29.538 size: 50.003479 MiB name: msgpool_58790 00:05:29.538 size: 36.509338 MiB name: fsdev_io_58790 00:05:29.538 size: 21.763794 MiB name: PDU_Pool 00:05:29.538 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:29.538 size: 4.133484 MiB name: evtpool_58790 00:05:29.538 size: 0.026123 MiB name: Session_Pool 00:05:29.538 end mempools------- 00:05:29.538 6 memzones totaling size 4.142822 MiB 00:05:29.538 size: 1.000366 MiB name: RG_ring_0_58790 00:05:29.538 size: 1.000366 MiB name: RG_ring_1_58790 00:05:29.538 size: 1.000366 MiB name: RG_ring_4_58790 00:05:29.538 size: 1.000366 MiB name: RG_ring_5_58790 00:05:29.538 size: 0.125366 MiB name: RG_ring_2_58790 00:05:29.538 size: 0.015991 MiB name: RG_ring_3_58790 00:05:29.538 end memzones------- 00:05:29.538 02:50:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:29.538 heap id: 0 total size: 824.000000 MiB number of busy elements: 322 number of free elements: 18 00:05:29.538 list of free elements. 
size: 16.779663 MiB 00:05:29.538 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:29.538 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:29.538 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:29.538 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:29.538 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:29.538 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:29.538 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:29.538 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:29.538 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:29.538 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:29.538 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:29.538 element at address: 0x20001b400000 with size: 0.560974 MiB 00:05:29.538 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:29.538 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:29.538 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:29.538 element at address: 0x200012c00000 with size: 0.433472 MiB 00:05:29.538 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:29.538 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:29.538 list of standard malloc elements. size: 199.289429 MiB 00:05:29.538 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:29.538 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:29.538 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:29.538 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:29.538 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:29.538 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:29.538 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:29.538 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:29.538 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:29.538 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:29.538 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:29.538 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:29.538 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:29.538 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:29.538 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:29.538 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:29.538 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:29.538 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:29.538 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:29.538 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:29.538 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:29.538 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:29.538 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:29.538 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:29.538 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:29.538 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:29.539 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:29.539 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:29.539 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:29.539 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:29.539 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:29.539 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:29.539 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:29.539 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:29.539 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:29.539 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:29.539 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:29.539 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:29.539 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:29.539 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:05:29.800 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4916c0 with size: 0.000244 MiB 
00:05:29.800 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:29.800 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:29.801 element at 
address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:29.801 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:29.801 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886d180 
with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:29.801 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:29.801 list of memzone associated elements. 
size: 607.930908 MiB 00:05:29.801 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:29.801 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:29.801 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:29.801 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:29.801 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:29.801 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58790_0 00:05:29.801 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:29.801 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58790_0 00:05:29.801 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:29.801 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58790_0 00:05:29.801 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:29.801 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:29.801 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:29.801 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:29.801 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:29.801 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58790_0 00:05:29.801 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:29.801 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58790 00:05:29.801 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:29.801 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58790 00:05:29.801 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:29.801 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:29.801 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:29.801 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:29.801 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:29.801 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:29.801 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:29.801 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:29.801 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:29.801 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58790 00:05:29.801 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:29.801 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58790 00:05:29.801 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:29.801 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58790 00:05:29.801 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:29.801 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58790 00:05:29.801 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:29.801 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58790 00:05:29.801 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:29.801 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58790 00:05:29.801 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:29.801 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:29.801 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:29.801 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:29.801 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:29.801 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:29.801 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:29.801 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58790 00:05:29.802 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:29.802 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58790 00:05:29.802 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:29.802 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:29.802 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:29.802 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:29.802 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:29.802 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58790 00:05:29.802 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:29.802 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:29.802 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:29.802 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58790 00:05:29.802 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:29.802 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58790 00:05:29.802 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:29.802 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58790 00:05:29.802 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:29.802 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:29.802 02:50:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:29.802 02:50:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58790 00:05:29.802 02:50:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58790 ']' 00:05:29.802 02:50:00 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58790 00:05:29.802 02:50:00 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:29.802 02:50:00 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.802 02:50:00 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58790 00:05:29.802 02:50:00 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.802 killing process with pid 58790 00:05:29.802 02:50:00 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.802 02:50:00 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58790' 00:05:29.802 02:50:00 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58790 00:05:29.802 02:50:00 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58790 00:05:31.707 00:05:31.707 real 0m3.099s 00:05:31.707 user 0m3.168s 00:05:31.707 sys 0m0.454s 00:05:31.707 ************************************ 00:05:31.707 END TEST dpdk_mem_utility 00:05:31.707 ************************************ 00:05:31.707 02:50:02 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.707 02:50:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:31.707 02:50:02 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:31.707 02:50:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.707 02:50:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.707 02:50:02 -- common/autotest_common.sh@10 -- # set +x 
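For reference, the memory report printed by the dpdk_mem_utility pass above is produced in two steps that can also be run by hand against a live target: the env_dpdk_get_mem_stats RPC asks spdk_tgt to write its DPDK heap/mempool/memzone state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that dump, first as the summary and then (with the -m 0 form used by the test) as the detailed per-element listing. A minimal sketch, assuming spdk_tgt is running on the default RPC socket:

    # ask the running target to dump its DPDK memory state
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # the response names the dump file, e.g. { "filename": "/tmp/spdk_mem_dump.txt" }

    # summary view: heaps, mempools and memzones
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    # detailed listing (free elements, malloc elements, memzone associations), as shown above
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0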
00:05:31.707 ************************************ 00:05:31.707 START TEST event 00:05:31.707 ************************************ 00:05:31.707 02:50:02 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:31.707 * Looking for test storage... 00:05:31.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:31.707 02:50:02 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:31.708 02:50:02 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:31.708 02:50:02 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:31.708 02:50:02 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:31.708 02:50:02 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.708 02:50:02 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.708 02:50:02 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.708 02:50:02 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.708 02:50:02 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.708 02:50:02 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.708 02:50:02 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.708 02:50:02 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.708 02:50:02 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.708 02:50:02 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.708 02:50:02 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.708 02:50:02 event -- scripts/common.sh@344 -- # case "$op" in 00:05:31.708 02:50:02 event -- scripts/common.sh@345 -- # : 1 00:05:31.708 02:50:02 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.708 02:50:02 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.708 02:50:02 event -- scripts/common.sh@365 -- # decimal 1 00:05:31.708 02:50:02 event -- scripts/common.sh@353 -- # local d=1 00:05:31.708 02:50:02 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.708 02:50:02 event -- scripts/common.sh@355 -- # echo 1 00:05:31.708 02:50:02 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.708 02:50:02 event -- scripts/common.sh@366 -- # decimal 2 00:05:31.708 02:50:02 event -- scripts/common.sh@353 -- # local d=2 00:05:31.708 02:50:02 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.708 02:50:02 event -- scripts/common.sh@355 -- # echo 2 00:05:31.708 02:50:02 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.708 02:50:02 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.708 02:50:02 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.708 02:50:02 event -- scripts/common.sh@368 -- # return 0 00:05:31.708 02:50:02 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.708 02:50:02 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:31.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.708 --rc genhtml_branch_coverage=1 00:05:31.708 --rc genhtml_function_coverage=1 00:05:31.708 --rc genhtml_legend=1 00:05:31.708 --rc geninfo_all_blocks=1 00:05:31.708 --rc geninfo_unexecuted_blocks=1 00:05:31.708 00:05:31.708 ' 00:05:31.708 02:50:02 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:31.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.708 --rc genhtml_branch_coverage=1 00:05:31.708 --rc genhtml_function_coverage=1 00:05:31.708 --rc genhtml_legend=1 00:05:31.708 --rc 
geninfo_all_blocks=1 00:05:31.708 --rc geninfo_unexecuted_blocks=1 00:05:31.708 00:05:31.708 ' 00:05:31.708 02:50:02 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:31.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.708 --rc genhtml_branch_coverage=1 00:05:31.708 --rc genhtml_function_coverage=1 00:05:31.708 --rc genhtml_legend=1 00:05:31.708 --rc geninfo_all_blocks=1 00:05:31.708 --rc geninfo_unexecuted_blocks=1 00:05:31.708 00:05:31.708 ' 00:05:31.708 02:50:02 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:31.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.708 --rc genhtml_branch_coverage=1 00:05:31.708 --rc genhtml_function_coverage=1 00:05:31.708 --rc genhtml_legend=1 00:05:31.708 --rc geninfo_all_blocks=1 00:05:31.708 --rc geninfo_unexecuted_blocks=1 00:05:31.708 00:05:31.708 ' 00:05:31.708 02:50:02 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:31.708 02:50:02 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:31.708 02:50:02 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:31.708 02:50:02 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:31.708 02:50:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.708 02:50:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.708 ************************************ 00:05:31.708 START TEST event_perf 00:05:31.708 ************************************ 00:05:31.708 02:50:02 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:31.708 Running I/O for 1 seconds...[2024-12-05 02:50:02.435243] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:31.708 [2024-12-05 02:50:02.435401] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58887 ] 00:05:31.967 [2024-12-05 02:50:02.613651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:31.967 [2024-12-05 02:50:02.699618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.967 [2024-12-05 02:50:02.699698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.967 Running I/O for 1 seconds...[2024-12-05 02:50:02.700650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.967 [2024-12-05 02:50:02.700654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.345 00:05:33.345 lcore 0: 196647 00:05:33.345 lcore 1: 196646 00:05:33.345 lcore 2: 196648 00:05:33.345 lcore 3: 196648 00:05:33.345 done. 
00:05:33.345 00:05:33.345 real 0m1.527s 00:05:33.345 user 0m4.302s 00:05:33.345 sys 0m0.096s 00:05:33.345 02:50:03 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.345 02:50:03 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:33.345 ************************************ 00:05:33.345 END TEST event_perf 00:05:33.345 ************************************ 00:05:33.345 02:50:03 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:33.345 02:50:03 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:33.345 02:50:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.345 02:50:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.345 ************************************ 00:05:33.345 START TEST event_reactor 00:05:33.345 ************************************ 00:05:33.345 02:50:03 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:33.345 [2024-12-05 02:50:04.002217] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:33.345 [2024-12-05 02:50:04.002798] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58932 ] 00:05:33.345 [2024-12-05 02:50:04.155912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.605 [2024-12-05 02:50:04.243499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.983 test_start 00:05:34.983 oneshot 00:05:34.983 tick 100 00:05:34.983 tick 100 00:05:34.983 tick 250 00:05:34.983 tick 100 00:05:34.983 tick 100 00:05:34.983 tick 100 00:05:34.983 tick 250 00:05:34.983 tick 500 00:05:34.983 tick 100 00:05:34.983 tick 100 00:05:34.983 tick 250 00:05:34.983 tick 100 00:05:34.983 tick 100 00:05:34.983 test_end 00:05:34.983 00:05:34.983 real 0m1.471s 00:05:34.983 user 0m1.297s 00:05:34.983 sys 0m0.066s 00:05:34.983 02:50:05 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.983 ************************************ 00:05:34.983 END TEST event_reactor 00:05:34.983 ************************************ 00:05:34.983 02:50:05 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:34.983 02:50:05 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.983 02:50:05 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:34.983 02:50:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.983 02:50:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.983 ************************************ 00:05:34.983 START TEST event_reactor_perf 00:05:34.983 ************************************ 00:05:34.983 02:50:05 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.983 [2024-12-05 02:50:05.536192] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:34.983 [2024-12-05 02:50:05.536368] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58963 ] 00:05:34.983 [2024-12-05 02:50:05.713512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.983 [2024-12-05 02:50:05.808097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.360 test_start 00:05:36.360 test_end 00:05:36.360 Performance: 332820 events per second 00:05:36.360 00:05:36.360 real 0m1.519s 00:05:36.360 user 0m1.325s 00:05:36.360 sys 0m0.084s 00:05:36.360 02:50:07 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.360 02:50:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.360 ************************************ 00:05:36.360 END TEST event_reactor_perf 00:05:36.360 ************************************ 00:05:36.360 02:50:07 event -- event/event.sh@49 -- # uname -s 00:05:36.360 02:50:07 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:36.360 02:50:07 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:36.360 02:50:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.360 02:50:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.360 02:50:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.360 ************************************ 00:05:36.360 START TEST event_scheduler 00:05:36.360 ************************************ 00:05:36.360 02:50:07 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:36.360 * Looking for test storage... 
00:05:36.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:36.360 02:50:07 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:36.360 02:50:07 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:36.360 02:50:07 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:36.620 02:50:07 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.620 02:50:07 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:36.620 02:50:07 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.620 02:50:07 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.620 --rc genhtml_branch_coverage=1 00:05:36.620 --rc genhtml_function_coverage=1 00:05:36.620 --rc genhtml_legend=1 00:05:36.620 --rc geninfo_all_blocks=1 00:05:36.620 --rc geninfo_unexecuted_blocks=1 00:05:36.620 00:05:36.620 ' 00:05:36.620 02:50:07 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.620 --rc genhtml_branch_coverage=1 00:05:36.620 --rc genhtml_function_coverage=1 00:05:36.620 --rc genhtml_legend=1 00:05:36.620 --rc geninfo_all_blocks=1 00:05:36.620 --rc geninfo_unexecuted_blocks=1 00:05:36.620 00:05:36.620 ' 00:05:36.620 02:50:07 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.620 --rc genhtml_branch_coverage=1 00:05:36.620 --rc genhtml_function_coverage=1 00:05:36.620 --rc genhtml_legend=1 00:05:36.620 --rc geninfo_all_blocks=1 00:05:36.620 --rc geninfo_unexecuted_blocks=1 00:05:36.620 00:05:36.620 ' 00:05:36.620 02:50:07 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.620 --rc genhtml_branch_coverage=1 00:05:36.620 --rc genhtml_function_coverage=1 00:05:36.620 --rc genhtml_legend=1 00:05:36.620 --rc geninfo_all_blocks=1 00:05:36.620 --rc geninfo_unexecuted_blocks=1 00:05:36.620 00:05:36.620 ' 00:05:36.620 02:50:07 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:36.620 02:50:07 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59039 00:05:36.620 02:50:07 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:36.620 02:50:07 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.620 02:50:07 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59039 00:05:36.620 02:50:07 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59039 ']' 00:05:36.620 02:50:07 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.620 02:50:07 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.620 02:50:07 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.620 02:50:07 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.620 02:50:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.620 [2024-12-05 02:50:07.361106] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:36.620 [2024-12-05 02:50:07.361298] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59039 ] 00:05:36.879 [2024-12-05 02:50:07.537878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.879 [2024-12-05 02:50:07.627884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.879 [2024-12-05 02:50:07.627969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.879 [2024-12-05 02:50:07.628137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.879 [2024-12-05 02:50:07.628333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.815 02:50:08 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.815 02:50:08 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:37.815 02:50:08 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:37.815 02:50:08 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.815 02:50:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.815 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.815 POWER: Cannot set governor of lcore 0 to userspace 00:05:37.815 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.815 POWER: Cannot set governor of lcore 0 to performance 00:05:37.815 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.815 POWER: Cannot set governor of lcore 0 to userspace 00:05:37.815 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.815 POWER: Cannot set governor of lcore 0 to userspace 00:05:37.815 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:37.815 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:37.815 POWER: Unable to set Power Management Environment for lcore 0 00:05:37.815 [2024-12-05 02:50:08.386348] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:37.816 [2024-12-05 02:50:08.386384] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:37.816 [2024-12-05 02:50:08.386407] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:37.816 [2024-12-05 02:50:08.386443] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:37.816 [2024-12-05 02:50:08.386462] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:37.816 [2024-12-05 02:50:08.386481] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:37.816 02:50:08 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.816 02:50:08 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:37.816 02:50:08 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.816 02:50:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.816 [2024-12-05 02:50:08.540161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.816 [2024-12-05 02:50:08.623356] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:37.816 02:50:08 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.816 02:50:08 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:37.816 02:50:08 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.816 02:50:08 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.816 02:50:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.816 ************************************ 00:05:37.816 START TEST scheduler_create_thread 00:05:37.816 ************************************ 00:05:37.816 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:37.816 02:50:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:37.816 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.816 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.816 2 00:05:37.816 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.816 02:50:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:37.816 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.816 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.816 3 00:05:37.816 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.816 02:50:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:37.816 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.816 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.075 4 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.075 5 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.075 6 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.075 7 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.075 8 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.075 9 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.075 10 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.075 02:50:08 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.075 02:50:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.013 02:50:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.013 02:50:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:39.013 02:50:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:39.013 02:50:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.013 02:50:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.950 02:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.950 00:05:39.950 real 0m2.135s 00:05:39.950 user 0m0.020s 00:05:39.950 sys 0m0.008s 00:05:39.950 02:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.950 ************************************ 00:05:39.950 END TEST scheduler_create_thread 00:05:39.950 02:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.950 ************************************ 00:05:40.209 02:50:10 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:40.209 02:50:10 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59039 00:05:40.209 02:50:10 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59039 ']' 00:05:40.209 02:50:10 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59039 00:05:40.209 02:50:10 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:40.209 02:50:10 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.209 02:50:10 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59039 00:05:40.209 02:50:10 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:40.209 02:50:10 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:40.209 killing process with pid 59039 00:05:40.209 02:50:10 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
59039' 00:05:40.209 02:50:10 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59039 00:05:40.209 02:50:10 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59039 00:05:40.469 [2024-12-05 02:50:11.251567] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:41.403 00:05:41.403 real 0m5.085s 00:05:41.403 user 0m9.171s 00:05:41.403 sys 0m0.441s 00:05:41.403 02:50:12 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.403 02:50:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.403 ************************************ 00:05:41.403 END TEST event_scheduler 00:05:41.403 ************************************ 00:05:41.403 02:50:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:41.403 02:50:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:41.403 02:50:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.403 02:50:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.403 02:50:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.403 ************************************ 00:05:41.403 START TEST app_repeat 00:05:41.403 ************************************ 00:05:41.403 02:50:12 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:41.403 02:50:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.403 02:50:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.403 02:50:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:41.403 02:50:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.403 02:50:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:41.403 02:50:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:41.403 02:50:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:41.403 02:50:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59144 00:05:41.403 02:50:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.403 Process app_repeat pid: 59144 00:05:41.403 02:50:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59144' 00:05:41.403 02:50:12 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:41.403 02:50:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:41.403 spdk_app_start Round 0 00:05:41.403 02:50:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:41.403 02:50:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59144 /var/tmp/spdk-nbd.sock 00:05:41.403 02:50:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59144 ']' 00:05:41.403 02:50:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.403 02:50:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.403 02:50:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:41.403 02:50:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.403 02:50:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.662 [2024-12-05 02:50:12.278480] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:41.662 [2024-12-05 02:50:12.278698] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59144 ] 00:05:41.662 [2024-12-05 02:50:12.459149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.922 [2024-12-05 02:50:12.551272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.922 [2024-12-05 02:50:12.551287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.922 [2024-12-05 02:50:12.710984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.489 02:50:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.489 02:50:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:42.489 02:50:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.760 Malloc0 00:05:42.760 02:50:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.033 Malloc1 00:05:43.293 02:50:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.293 02:50:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.293 02:50:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.293 02:50:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.293 02:50:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.293 02:50:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.293 02:50:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.293 02:50:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.293 02:50:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.293 02:50:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.293 02:50:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.293 02:50:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.293 02:50:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.293 02:50:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.293 02:50:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.293 02:50:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.293 /dev/nbd0 00:05:43.552 02:50:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.552 02:50:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:43.552 02:50:14 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:05:43.552 02:50:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:43.552 02:50:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:43.552 02:50:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:43.552 02:50:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:43.552 02:50:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:43.552 02:50:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:43.552 02:50:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:43.552 02:50:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.552 1+0 records in 00:05:43.552 1+0 records out 00:05:43.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230088 s, 17.8 MB/s 00:05:43.552 02:50:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.552 02:50:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:43.552 02:50:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.552 02:50:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:43.552 02:50:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:43.552 02:50:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.552 02:50:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.552 02:50:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.811 /dev/nbd1 00:05:43.811 02:50:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.811 02:50:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.811 02:50:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:43.811 02:50:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:43.811 02:50:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:43.811 02:50:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:43.811 02:50:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:43.811 02:50:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:43.811 02:50:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:43.811 02:50:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:43.811 02:50:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.811 1+0 records in 00:05:43.811 1+0 records out 00:05:43.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279766 s, 14.6 MB/s 00:05:43.811 02:50:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.811 02:50:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:43.811 02:50:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.811 02:50:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:43.811 02:50:14 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:05:43.811 02:50:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.811 02:50:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.811 02:50:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.811 02:50:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.811 02:50:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:44.071 { 00:05:44.071 "nbd_device": "/dev/nbd0", 00:05:44.071 "bdev_name": "Malloc0" 00:05:44.071 }, 00:05:44.071 { 00:05:44.071 "nbd_device": "/dev/nbd1", 00:05:44.071 "bdev_name": "Malloc1" 00:05:44.071 } 00:05:44.071 ]' 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.071 { 00:05:44.071 "nbd_device": "/dev/nbd0", 00:05:44.071 "bdev_name": "Malloc0" 00:05:44.071 }, 00:05:44.071 { 00:05:44.071 "nbd_device": "/dev/nbd1", 00:05:44.071 "bdev_name": "Malloc1" 00:05:44.071 } 00:05:44.071 ]' 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.071 /dev/nbd1' 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.071 /dev/nbd1' 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.071 256+0 records in 00:05:44.071 256+0 records out 00:05:44.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101244 s, 104 MB/s 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.071 256+0 records in 00:05:44.071 256+0 records out 00:05:44.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024869 s, 42.2 MB/s 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.071 256+0 records in 00:05:44.071 256+0 
records out 00:05:44.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279795 s, 37.5 MB/s 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.071 02:50:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.331 02:50:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.331 02:50:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.331 02:50:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.331 02:50:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.331 02:50:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.331 02:50:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.589 02:50:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.590 02:50:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.590 02:50:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.590 02:50:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.848 02:50:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.848 02:50:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.848 02:50:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.848 02:50:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.848 02:50:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:05:44.848 02:50:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.848 02:50:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.848 02:50:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.848 02:50:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.848 02:50:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.848 02:50:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.107 02:50:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.107 02:50:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.107 02:50:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.107 02:50:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.107 02:50:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.107 02:50:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.107 02:50:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:45.107 02:50:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.107 02:50:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.107 02:50:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.107 02:50:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.107 02:50:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.107 02:50:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:45.675 02:50:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.612 [2024-12-05 02:50:17.109008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.612 [2024-12-05 02:50:17.194242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.612 [2024-12-05 02:50:17.194250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.612 [2024-12-05 02:50:17.352355] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.612 [2024-12-05 02:50:17.352528] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.612 [2024-12-05 02:50:17.352556] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:48.519 02:50:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.519 02:50:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:48.519 spdk_app_start Round 1 00:05:48.519 02:50:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59144 /var/tmp/spdk-nbd.sock 00:05:48.519 02:50:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59144 ']' 00:05:48.519 02:50:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.519 02:50:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.519 02:50:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:48.519 02:50:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.519 02:50:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.779 02:50:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.779 02:50:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:48.779 02:50:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.038 Malloc0 00:05:49.297 02:50:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.557 Malloc1 00:05:49.557 02:50:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.557 02:50:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.557 02:50:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.557 02:50:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.557 02:50:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.557 02:50:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.557 02:50:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.557 02:50:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.557 02:50:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.557 02:50:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.557 02:50:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.557 02:50:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.557 02:50:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:49.557 02:50:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.557 02:50:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.557 02:50:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.816 /dev/nbd0 00:05:49.816 02:50:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.816 02:50:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.816 02:50:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:49.816 02:50:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:49.816 02:50:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:49.816 02:50:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:49.816 02:50:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:49.816 02:50:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:49.816 02:50:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:49.816 02:50:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:49.816 02:50:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.816 1+0 records in 00:05:49.816 1+0 records out 
00:05:49.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438666 s, 9.3 MB/s 00:05:49.816 02:50:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.816 02:50:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:49.816 02:50:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.816 02:50:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:49.816 02:50:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:49.816 02:50:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.816 02:50:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.816 02:50:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.076 /dev/nbd1 00:05:50.076 02:50:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.076 02:50:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.076 02:50:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:50.076 02:50:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:50.076 02:50:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:50.076 02:50:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:50.076 02:50:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:50.076 02:50:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:50.076 02:50:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:50.076 02:50:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:50.076 02:50:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.076 1+0 records in 00:05:50.076 1+0 records out 00:05:50.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313617 s, 13.1 MB/s 00:05:50.076 02:50:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.076 02:50:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:50.076 02:50:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.076 02:50:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:50.076 02:50:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:50.076 02:50:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.076 02:50:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.076 02:50:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.076 02:50:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.076 02:50:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.336 02:50:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:50.336 { 00:05:50.336 "nbd_device": "/dev/nbd0", 00:05:50.336 "bdev_name": "Malloc0" 00:05:50.336 }, 00:05:50.336 { 00:05:50.336 "nbd_device": "/dev/nbd1", 00:05:50.336 "bdev_name": "Malloc1" 00:05:50.336 } 
00:05:50.336 ]' 00:05:50.336 02:50:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.336 { 00:05:50.336 "nbd_device": "/dev/nbd0", 00:05:50.336 "bdev_name": "Malloc0" 00:05:50.336 }, 00:05:50.336 { 00:05:50.336 "nbd_device": "/dev/nbd1", 00:05:50.336 "bdev_name": "Malloc1" 00:05:50.336 } 00:05:50.336 ]' 00:05:50.336 02:50:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.336 02:50:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.336 /dev/nbd1' 00:05:50.336 02:50:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.336 /dev/nbd1' 00:05:50.336 02:50:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.336 02:50:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.336 02:50:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.336 02:50:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.336 02:50:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.336 02:50:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.336 02:50:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.336 02:50:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.336 02:50:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.336 02:50:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.336 02:50:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.336 02:50:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.336 256+0 records in 00:05:50.336 256+0 records out 00:05:50.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496282 s, 211 MB/s 00:05:50.336 02:50:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.336 02:50:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.596 256+0 records in 00:05:50.596 256+0 records out 00:05:50.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0330438 s, 31.7 MB/s 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.596 256+0 records in 00:05:50.596 256+0 records out 00:05:50.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304027 s, 34.5 MB/s 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.596 02:50:21 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.596 02:50:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.856 02:50:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.856 02:50:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.856 02:50:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.856 02:50:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.856 02:50:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.856 02:50:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.856 02:50:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.856 02:50:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.856 02:50:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.856 02:50:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.116 02:50:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.116 02:50:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.116 02:50:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.116 02:50:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.116 02:50:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.116 02:50:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.116 02:50:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.116 02:50:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.116 02:50:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.116 02:50:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.116 02:50:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.375 02:50:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.375 02:50:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.375 02:50:22 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:51.636 02:50:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.636 02:50:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.636 02:50:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.636 02:50:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:51.636 02:50:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.636 02:50:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.636 02:50:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.636 02:50:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.636 02:50:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.636 02:50:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.895 02:50:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.833 [2024-12-05 02:50:23.518226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.833 [2024-12-05 02:50:23.600788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.833 [2024-12-05 02:50:23.600792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.093 [2024-12-05 02:50:23.757633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.093 [2024-12-05 02:50:23.757799] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:53.093 [2024-12-05 02:50:23.757820] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.995 02:50:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.995 spdk_app_start Round 2 00:05:54.995 02:50:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:54.995 02:50:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59144 /var/tmp/spdk-nbd.sock 00:05:54.995 02:50:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59144 ']' 00:05:54.995 02:50:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.995 02:50:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.995 02:50:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:54.995 02:50:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.995 02:50:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.254 02:50:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.254 02:50:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:55.254 02:50:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.513 Malloc0 00:05:55.513 02:50:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.772 Malloc1 00:05:55.772 02:50:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.772 02:50:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.772 02:50:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.772 02:50:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.772 02:50:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.772 02:50:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.772 02:50:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.772 02:50:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.772 02:50:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.772 02:50:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.772 02:50:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.772 02:50:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.772 02:50:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.772 02:50:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.772 02:50:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.772 02:50:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:56.031 /dev/nbd0 00:05:56.032 02:50:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.032 02:50:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:56.032 02:50:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:56.032 02:50:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:56.032 02:50:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.032 02:50:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.032 02:50:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:56.032 02:50:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:56.032 02:50:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.032 02:50:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.032 02:50:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.032 1+0 records in 00:05:56.032 1+0 records out 
00:05:56.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00149689 s, 2.7 MB/s 00:05:56.032 02:50:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.032 02:50:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:56.032 02:50:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.032 02:50:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.032 02:50:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:56.032 02:50:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.032 02:50:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.032 02:50:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.600 /dev/nbd1 00:05:56.600 02:50:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.600 02:50:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.600 02:50:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:56.600 02:50:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:56.600 02:50:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.600 02:50:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.600 02:50:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:56.600 02:50:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:56.600 02:50:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.600 02:50:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.600 02:50:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.600 1+0 records in 00:05:56.600 1+0 records out 00:05:56.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472356 s, 8.7 MB/s 00:05:56.600 02:50:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.600 02:50:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:56.600 02:50:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.600 02:50:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.600 02:50:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:56.600 02:50:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.600 02:50:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.600 02:50:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.600 02:50:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.600 02:50:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.860 { 00:05:56.860 "nbd_device": "/dev/nbd0", 00:05:56.860 "bdev_name": "Malloc0" 00:05:56.860 }, 00:05:56.860 { 00:05:56.860 "nbd_device": "/dev/nbd1", 00:05:56.860 "bdev_name": "Malloc1" 00:05:56.860 } 00:05:56.860 
]' 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.860 { 00:05:56.860 "nbd_device": "/dev/nbd0", 00:05:56.860 "bdev_name": "Malloc0" 00:05:56.860 }, 00:05:56.860 { 00:05:56.860 "nbd_device": "/dev/nbd1", 00:05:56.860 "bdev_name": "Malloc1" 00:05:56.860 } 00:05:56.860 ]' 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.860 /dev/nbd1' 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.860 /dev/nbd1' 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.860 256+0 records in 00:05:56.860 256+0 records out 00:05:56.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109272 s, 96.0 MB/s 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.860 256+0 records in 00:05:56.860 256+0 records out 00:05:56.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287113 s, 36.5 MB/s 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.860 256+0 records in 00:05:56.860 256+0 records out 00:05:56.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029016 s, 36.1 MB/s 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.860 02:50:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.120 02:50:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.120 02:50:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:57.120 02:50:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.120 02:50:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.120 02:50:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.120 02:50:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.120 02:50:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.120 02:50:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.120 02:50:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.120 02:50:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.378 02:50:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.378 02:50:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.379 02:50:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.379 02:50:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.379 02:50:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.379 02:50:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.379 02:50:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.379 02:50:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.379 02:50:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.379 02:50:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.379 02:50:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.947 02:50:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.947 02:50:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.947 02:50:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:57.947 02:50:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.947 02:50:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.947 02:50:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.947 02:50:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:57.947 02:50:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.947 02:50:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.947 02:50:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.947 02:50:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.947 02:50:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.947 02:50:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.207 02:50:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:59.144 [2024-12-05 02:50:29.892496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.404 [2024-12-05 02:50:29.991226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.404 [2024-12-05 02:50:29.991226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.404 [2024-12-05 02:50:30.138256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.404 [2024-12-05 02:50:30.138403] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.404 [2024-12-05 02:50:30.138430] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:01.335 02:50:32 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59144 /var/tmp/spdk-nbd.sock 00:06:01.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.335 02:50:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59144 ']' 00:06:01.335 02:50:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.335 02:50:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.335 02:50:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:01.335 02:50:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.335 02:50:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.594 02:50:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.594 02:50:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:01.594 02:50:32 event.app_repeat -- event/event.sh@39 -- # killprocess 59144 00:06:01.594 02:50:32 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59144 ']' 00:06:01.594 02:50:32 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59144 00:06:01.594 02:50:32 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:01.594 02:50:32 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.594 02:50:32 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59144 00:06:01.594 killing process with pid 59144 00:06:01.594 02:50:32 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.594 02:50:32 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.594 02:50:32 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59144' 00:06:01.594 02:50:32 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59144 00:06:01.594 02:50:32 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59144 00:06:02.532 spdk_app_start is called in Round 0. 00:06:02.532 Shutdown signal received, stop current app iteration 00:06:02.532 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 reinitialization... 00:06:02.532 spdk_app_start is called in Round 1. 00:06:02.532 Shutdown signal received, stop current app iteration 00:06:02.532 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 reinitialization... 00:06:02.532 spdk_app_start is called in Round 2. 00:06:02.532 Shutdown signal received, stop current app iteration 00:06:02.532 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 reinitialization... 00:06:02.532 spdk_app_start is called in Round 3. 00:06:02.532 Shutdown signal received, stop current app iteration 00:06:02.532 02:50:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:02.532 02:50:33 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:02.532 00:06:02.532 real 0m21.008s 00:06:02.532 user 0m47.019s 00:06:02.532 sys 0m2.659s 00:06:02.532 02:50:33 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.532 02:50:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.532 ************************************ 00:06:02.532 END TEST app_repeat 00:06:02.532 ************************************ 00:06:02.532 02:50:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:02.532 02:50:33 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:02.532 02:50:33 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.532 02:50:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.532 02:50:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.532 ************************************ 00:06:02.532 START TEST cpu_locks 00:06:02.532 ************************************ 00:06:02.532 02:50:33 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:02.532 * Looking for test storage... 
00:06:02.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:02.532 02:50:33 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:02.532 02:50:33 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:02.532 02:50:33 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:02.791 02:50:33 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.791 02:50:33 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:02.791 02:50:33 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.791 02:50:33 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:02.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.791 --rc genhtml_branch_coverage=1 00:06:02.791 --rc genhtml_function_coverage=1 00:06:02.791 --rc genhtml_legend=1 00:06:02.791 --rc geninfo_all_blocks=1 00:06:02.791 --rc geninfo_unexecuted_blocks=1 00:06:02.791 00:06:02.791 ' 00:06:02.791 02:50:33 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:02.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.791 --rc genhtml_branch_coverage=1 00:06:02.791 --rc genhtml_function_coverage=1 
00:06:02.791 --rc genhtml_legend=1 00:06:02.791 --rc geninfo_all_blocks=1 00:06:02.791 --rc geninfo_unexecuted_blocks=1 00:06:02.791 00:06:02.791 ' 00:06:02.791 02:50:33 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:02.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.791 --rc genhtml_branch_coverage=1 00:06:02.791 --rc genhtml_function_coverage=1 00:06:02.791 --rc genhtml_legend=1 00:06:02.791 --rc geninfo_all_blocks=1 00:06:02.791 --rc geninfo_unexecuted_blocks=1 00:06:02.791 00:06:02.791 ' 00:06:02.791 02:50:33 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:02.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.791 --rc genhtml_branch_coverage=1 00:06:02.791 --rc genhtml_function_coverage=1 00:06:02.791 --rc genhtml_legend=1 00:06:02.791 --rc geninfo_all_blocks=1 00:06:02.791 --rc geninfo_unexecuted_blocks=1 00:06:02.791 00:06:02.791 ' 00:06:02.791 02:50:33 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:02.791 02:50:33 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:02.791 02:50:33 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:02.791 02:50:33 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:02.791 02:50:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.791 02:50:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.791 02:50:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.791 ************************************ 00:06:02.791 START TEST default_locks 00:06:02.791 ************************************ 00:06:02.791 02:50:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:02.791 02:50:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59603 00:06:02.791 02:50:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59603 00:06:02.791 02:50:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.791 02:50:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59603 ']' 00:06:02.791 02:50:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.791 02:50:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.791 02:50:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.791 02:50:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.791 02:50:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.791 [2024-12-05 02:50:33.588430] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:06:02.791 [2024-12-05 02:50:33.588870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59603 ] 00:06:03.050 [2024-12-05 02:50:33.770235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.050 [2024-12-05 02:50:33.866009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.309 [2024-12-05 02:50:34.059897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.875 02:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.875 02:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:03.875 02:50:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59603 00:06:03.875 02:50:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59603 00:06:03.875 02:50:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.134 02:50:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59603 00:06:04.134 02:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59603 ']' 00:06:04.134 02:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59603 00:06:04.134 02:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:04.134 02:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.134 02:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59603 00:06:04.134 killing process with pid 59603 00:06:04.134 02:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.134 02:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.134 02:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59603' 00:06:04.134 02:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59603 00:06:04.134 02:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59603 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59603 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59603 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:06.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:06.041 ERROR: process (pid: 59603) is no longer running 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59603 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59603 ']' 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.041 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59603) - No such process 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:06.041 00:06:06.041 real 0m3.309s 00:06:06.041 user 0m3.364s 00:06:06.041 sys 0m0.601s 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.041 02:50:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.041 ************************************ 00:06:06.041 END TEST default_locks 00:06:06.041 ************************************ 00:06:06.041 02:50:36 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:06.041 02:50:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.041 02:50:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.041 02:50:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.041 ************************************ 00:06:06.041 START TEST default_locks_via_rpc 00:06:06.041 ************************************ 00:06:06.041 02:50:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:06.041 02:50:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59673 00:06:06.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:06.041 02:50:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59673 00:06:06.041 02:50:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.041 02:50:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59673 ']' 00:06:06.041 02:50:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.041 02:50:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.041 02:50:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.041 02:50:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.041 02:50:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.301 [2024-12-05 02:50:36.918994] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:06.301 [2024-12-05 02:50:36.919392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59673 ] 00:06:06.301 [2024-12-05 02:50:37.081171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.560 [2024-12-05 02:50:37.170417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.560 [2024-12-05 02:50:37.379821] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.129 02:50:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.129 02:50:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:07.129 02:50:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:07.129 02:50:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.129 02:50:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.129 02:50:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.129 02:50:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:07.129 02:50:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.129 02:50:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.129 02:50:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.129 02:50:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:07.129 02:50:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.129 02:50:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.129 02:50:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.129 02:50:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59673 00:06:07.129 02:50:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks 
-p 59673 00:06:07.129 02:50:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.697 02:50:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59673 00:06:07.698 02:50:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59673 ']' 00:06:07.698 02:50:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59673 00:06:07.698 02:50:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:07.698 02:50:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.698 02:50:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59673 00:06:07.698 killing process with pid 59673 00:06:07.698 02:50:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.698 02:50:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.698 02:50:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59673' 00:06:07.698 02:50:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59673 00:06:07.698 02:50:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59673 00:06:09.604 ************************************ 00:06:09.604 END TEST default_locks_via_rpc 00:06:09.604 ************************************ 00:06:09.604 00:06:09.604 real 0m3.480s 00:06:09.604 user 0m3.626s 00:06:09.604 sys 0m0.613s 00:06:09.604 02:50:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.604 02:50:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.604 02:50:40 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:09.604 02:50:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.604 02:50:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.604 02:50:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.604 ************************************ 00:06:09.604 START TEST non_locking_app_on_locked_coremask 00:06:09.604 ************************************ 00:06:09.604 02:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:09.604 02:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59736 00:06:09.604 02:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59736 /var/tmp/spdk.sock 00:06:09.604 02:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.604 02:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59736 ']' 00:06:09.604 02:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.604 02:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:09.604 02:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.604 02:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.604 02:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.863 [2024-12-05 02:50:40.465481] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:09.863 [2024-12-05 02:50:40.465642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59736 ] 00:06:09.863 [2024-12-05 02:50:40.627837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.123 [2024-12-05 02:50:40.724157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.123 [2024-12-05 02:50:40.917102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.691 02:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.691 02:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:10.691 02:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59752 00:06:10.691 02:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59752 /var/tmp/spdk2.sock 00:06:10.691 02:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:10.691 02:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59752 ']' 00:06:10.691 02:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.691 02:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.691 02:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.691 02:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.691 02:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.950 [2024-12-05 02:50:41.542305] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:10.950 [2024-12-05 02:50:41.542730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59752 ] 00:06:10.950 [2024-12-05 02:50:41.729294] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:10.950 [2024-12-05 02:50:41.729355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.209 [2024-12-05 02:50:41.893003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.468 [2024-12-05 02:50:42.283134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.405 02:50:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.405 02:50:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:12.405 02:50:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59736 00:06:12.405 02:50:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59736 00:06:12.405 02:50:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.342 02:50:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59736 00:06:13.342 02:50:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59736 ']' 00:06:13.342 02:50:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59736 00:06:13.342 02:50:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:13.342 02:50:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.342 02:50:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59736 00:06:13.342 killing process with pid 59736 00:06:13.342 02:50:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.342 02:50:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.342 02:50:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59736' 00:06:13.342 02:50:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59736 00:06:13.342 02:50:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59736 00:06:16.656 02:50:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59752 00:06:16.656 02:50:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59752 ']' 00:06:16.656 02:50:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59752 00:06:16.656 02:50:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:16.656 02:50:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.656 02:50:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59752 00:06:16.920 killing process with pid 59752 00:06:16.920 02:50:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.920 02:50:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.920 02:50:47 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59752' 00:06:16.920 02:50:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59752 00:06:16.920 02:50:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59752 00:06:18.821 ************************************ 00:06:18.821 END TEST non_locking_app_on_locked_coremask 00:06:18.821 ************************************ 00:06:18.821 00:06:18.821 real 0m8.855s 00:06:18.821 user 0m9.330s 00:06:18.821 sys 0m1.199s 00:06:18.821 02:50:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.821 02:50:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.821 02:50:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:18.821 02:50:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.821 02:50:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.821 02:50:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.821 ************************************ 00:06:18.821 START TEST locking_app_on_unlocked_coremask 00:06:18.821 ************************************ 00:06:18.821 02:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:18.821 02:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59878 00:06:18.821 02:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59878 /var/tmp/spdk.sock 00:06:18.821 02:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:18.821 02:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59878 ']' 00:06:18.821 02:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.821 02:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.821 02:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.821 02:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.821 02:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.821 [2024-12-05 02:50:49.367537] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:18.821 [2024-12-05 02:50:49.367722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59878 ] 00:06:18.821 [2024-12-05 02:50:49.532872] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:18.821 [2024-12-05 02:50:49.532939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.821 [2024-12-05 02:50:49.618729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.080 [2024-12-05 02:50:49.810566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.649 02:50:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.649 02:50:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:19.649 02:50:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:19.649 02:50:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59894 00:06:19.649 02:50:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59894 /var/tmp/spdk2.sock 00:06:19.649 02:50:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59894 ']' 00:06:19.649 02:50:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.649 02:50:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.649 02:50:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.649 02:50:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.649 02:50:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.649 [2024-12-05 02:50:50.413081] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:06:19.649 [2024-12-05 02:50:50.413254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59894 ] 00:06:19.908 [2024-12-05 02:50:50.593411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.168 [2024-12-05 02:50:50.753061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.427 [2024-12-05 02:50:51.147738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.363 02:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.363 02:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:21.363 02:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59894 00:06:21.363 02:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59894 00:06:21.363 02:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.298 02:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59878 00:06:22.298 02:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59878 ']' 00:06:22.298 02:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59878 00:06:22.298 02:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:22.298 02:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.298 02:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59878 00:06:22.298 02:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.298 02:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.298 killing process with pid 59878 00:06:22.298 02:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59878' 00:06:22.298 02:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59878 00:06:22.298 02:50:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59878 00:06:25.588 02:50:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59894 00:06:25.588 02:50:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59894 ']' 00:06:25.588 02:50:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59894 00:06:25.588 02:50:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:25.588 02:50:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.588 02:50:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59894 00:06:25.588 killing process with pid 59894 00:06:25.588 02:50:56 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.588 02:50:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.588 02:50:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59894' 00:06:25.588 02:50:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59894 00:06:25.588 02:50:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59894 00:06:27.494 ************************************ 00:06:27.494 END TEST locking_app_on_unlocked_coremask 00:06:27.494 ************************************ 00:06:27.494 00:06:27.494 real 0m8.826s 00:06:27.494 user 0m9.396s 00:06:27.494 sys 0m1.155s 00:06:27.494 02:50:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.494 02:50:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.494 02:50:58 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:27.494 02:50:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.494 02:50:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.494 02:50:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.494 ************************************ 00:06:27.494 START TEST locking_app_on_locked_coremask 00:06:27.494 ************************************ 00:06:27.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.494 02:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:27.494 02:50:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60016 00:06:27.494 02:50:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60016 /var/tmp/spdk.sock 00:06:27.494 02:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60016 ']' 00:06:27.494 02:50:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.494 02:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.494 02:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.494 02:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.494 02:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.494 02:50:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.494 [2024-12-05 02:50:58.269721] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:06:27.494 [2024-12-05 02:50:58.269954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60016 ] 00:06:27.753 [2024-12-05 02:50:58.445685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.753 [2024-12-05 02:50:58.524485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.012 [2024-12-05 02:50:58.709466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.580 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.580 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:28.580 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60032 00:06:28.580 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60032 /var/tmp/spdk2.sock 00:06:28.580 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:28.580 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:28.580 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60032 /var/tmp/spdk2.sock 00:06:28.580 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:28.580 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.580 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:28.580 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.580 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60032 /var/tmp/spdk2.sock 00:06:28.580 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60032 ']' 00:06:28.580 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.580 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.580 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.581 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.581 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.581 [2024-12-05 02:50:59.302965] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:06:28.581 [2024-12-05 02:50:59.303363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60032 ] 00:06:28.840 [2024-12-05 02:50:59.492528] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60016 has claimed it. 00:06:28.840 [2024-12-05 02:50:59.492604] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:29.407 ERROR: process (pid: 60032) is no longer running 00:06:29.407 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60032) - No such process 00:06:29.407 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.407 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:29.407 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:29.407 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.407 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:29.407 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.407 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60016 00:06:29.407 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60016 00:06:29.407 02:50:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.664 02:51:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60016 00:06:29.664 02:51:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60016 ']' 00:06:29.664 02:51:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60016 00:06:29.664 02:51:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:29.664 02:51:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.664 02:51:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60016 00:06:29.664 killing process with pid 60016 00:06:29.664 02:51:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.664 02:51:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.664 02:51:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60016' 00:06:29.664 02:51:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60016 00:06:29.664 02:51:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60016 00:06:31.572 00:06:31.572 real 0m4.084s 00:06:31.572 user 0m4.500s 00:06:31.572 sys 0m0.751s 00:06:31.572 02:51:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.572 02:51:02 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:31.572 ************************************ 00:06:31.572 END TEST locking_app_on_locked_coremask 00:06:31.572 ************************************ 00:06:31.572 02:51:02 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:31.572 02:51:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.572 02:51:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.572 02:51:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.572 ************************************ 00:06:31.572 START TEST locking_overlapped_coremask 00:06:31.572 ************************************ 00:06:31.572 02:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:31.572 02:51:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60096 00:06:31.572 02:51:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60096 /var/tmp/spdk.sock 00:06:31.572 02:51:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:31.572 02:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60096 ']' 00:06:31.573 02:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.573 02:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.573 02:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.573 02:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.573 02:51:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.573 [2024-12-05 02:51:02.410224] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:06:31.573 [2024-12-05 02:51:02.410398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60096 ] 00:06:31.830 [2024-12-05 02:51:02.599593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.087 [2024-12-05 02:51:02.728865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.087 [2024-12-05 02:51:02.728986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.087 [2024-12-05 02:51:02.728999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.344 [2024-12-05 02:51:02.951824] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.910 02:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.910 02:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:32.910 02:51:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60118 00:06:32.910 02:51:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60118 /var/tmp/spdk2.sock 00:06:32.910 02:51:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:32.910 02:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:32.910 02:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60118 /var/tmp/spdk2.sock 00:06:32.910 02:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:32.910 02:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.910 02:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:32.910 02:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.910 02:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60118 /var/tmp/spdk2.sock 00:06:32.910 02:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60118 ']' 00:06:32.910 02:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.910 02:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.910 02:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.910 02:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.910 02:51:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.910 [2024-12-05 02:51:03.575452] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:06:32.910 [2024-12-05 02:51:03.575620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60118 ] 00:06:33.168 [2024-12-05 02:51:03.775905] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60096 has claimed it. 00:06:33.168 [2024-12-05 02:51:03.776010] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:33.425 ERROR: process (pid: 60118) is no longer running 00:06:33.425 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60118) - No such process 00:06:33.425 02:51:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.425 02:51:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:33.425 02:51:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:33.425 02:51:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.425 02:51:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:33.425 02:51:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.425 02:51:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:33.425 02:51:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:33.425 02:51:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:33.425 02:51:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:33.425 02:51:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60096 00:06:33.425 02:51:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60096 ']' 00:06:33.425 02:51:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60096 00:06:33.425 02:51:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:33.425 02:51:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.425 02:51:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60096 00:06:33.683 02:51:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.683 02:51:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.683 02:51:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60096' 00:06:33.683 killing process with pid 60096 00:06:33.683 02:51:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60096 00:06:33.683 02:51:04 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60096 00:06:35.585 00:06:35.585 real 0m3.835s 00:06:35.585 user 0m10.476s 00:06:35.585 sys 0m0.556s 00:06:35.585 02:51:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.585 02:51:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.585 ************************************ 00:06:35.585 END TEST locking_overlapped_coremask 00:06:35.585 ************************************ 00:06:35.585 02:51:06 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:35.585 02:51:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.585 02:51:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.585 02:51:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.585 ************************************ 00:06:35.585 START TEST locking_overlapped_coremask_via_rpc 00:06:35.585 ************************************ 00:06:35.585 02:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:35.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.585 02:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60172 00:06:35.585 02:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:35.585 02:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60172 /var/tmp/spdk.sock 00:06:35.585 02:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60172 ']' 00:06:35.585 02:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.585 02:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.585 02:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.585 02:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.585 02:51:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.585 [2024-12-05 02:51:06.271934] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:35.585 [2024-12-05 02:51:06.272075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60172 ] 00:06:35.845 [2024-12-05 02:51:06.433777] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:35.845 [2024-12-05 02:51:06.433842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.845 [2024-12-05 02:51:06.517353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.845 [2024-12-05 02:51:06.517490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.845 [2024-12-05 02:51:06.517493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.104 [2024-12-05 02:51:06.725654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.673 02:51:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.673 02:51:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:36.673 02:51:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60190 00:06:36.673 02:51:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:36.673 02:51:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60190 /var/tmp/spdk2.sock 00:06:36.673 02:51:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60190 ']' 00:06:36.673 02:51:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.673 02:51:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.673 02:51:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.673 02:51:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.673 02:51:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.673 [2024-12-05 02:51:07.343210] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:36.673 [2024-12-05 02:51:07.343863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60190 ] 00:06:36.932 [2024-12-05 02:51:07.535408] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:36.932 [2024-12-05 02:51:07.535476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.932 [2024-12-05 02:51:07.731254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.932 [2024-12-05 02:51:07.734904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.932 [2024-12-05 02:51:07.734910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:37.502 [2024-12-05 02:51:08.150611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.441 [2024-12-05 02:51:09.151960] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60172 has claimed it. 
00:06:38.441 request: 00:06:38.441 { 00:06:38.441 "method": "framework_enable_cpumask_locks", 00:06:38.441 "req_id": 1 00:06:38.441 } 00:06:38.441 Got JSON-RPC error response 00:06:38.441 response: 00:06:38.441 { 00:06:38.441 "code": -32603, 00:06:38.441 "message": "Failed to claim CPU core: 2" 00:06:38.441 } 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60172 /var/tmp/spdk.sock 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60172 ']' 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.441 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.701 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.701 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:38.701 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60190 /var/tmp/spdk2.sock 00:06:38.701 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60190 ']' 00:06:38.701 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.701 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.701 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
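The JSON-RPC rejection above is the overlap the locking_overlapped_coremask_via_rpc test is probing: the first target (-m 0x7, cores 0-2) takes the per-core lock files once framework_enable_cpumask_locks is issued, so the second target (-m 0x1c, cores 2-4) cannot claim core 2. A minimal sketch of that sequence, reconstructed only from the commands visible in this trace (rpc_cmd in the trace wraps scripts/rpc.py; sockets and masks below mirror the trace, everything else is illustrative, not the exact invocation used here):
# both targets start with core locks disabled, on overlapping masks
spdk_tgt -m 0x7  --disable-cpumask-locks &                          # cores 0-2
spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # cores 2-4
# first target claims its cores -> /var/tmp/spdk_cpu_lock_000..002 appear
rpc.py framework_enable_cpumask_locks
# second target tries the same over its own socket and is rejected,
# matching the error above: "Failed to claim CPU core: 2"
rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks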
00:06:38.701 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.701 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.960 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.960 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:38.960 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:38.960 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:38.960 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:38.960 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:38.960 ************************************ 00:06:38.960 END TEST locking_overlapped_coremask_via_rpc 00:06:38.960 ************************************ 00:06:38.960 00:06:38.960 real 0m3.617s 00:06:38.960 user 0m1.484s 00:06:38.960 sys 0m0.163s 00:06:38.960 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.960 02:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.220 02:51:09 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:39.220 02:51:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60172 ]] 00:06:39.220 02:51:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60172 00:06:39.220 02:51:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60172 ']' 00:06:39.220 02:51:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60172 00:06:39.220 02:51:09 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:39.220 02:51:09 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.220 02:51:09 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60172 00:06:39.220 killing process with pid 60172 00:06:39.220 02:51:09 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.220 02:51:09 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.220 02:51:09 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60172' 00:06:39.220 02:51:09 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60172 00:06:39.220 02:51:09 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60172 00:06:41.126 02:51:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60190 ]] 00:06:41.126 02:51:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60190 00:06:41.126 02:51:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60190 ']' 00:06:41.126 02:51:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60190 00:06:41.126 02:51:11 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:41.126 02:51:11 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.126 
02:51:11 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60190 00:06:41.126 02:51:11 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:41.126 02:51:11 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:41.126 killing process with pid 60190 00:06:41.126 02:51:11 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60190' 00:06:41.126 02:51:11 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60190 00:06:41.126 02:51:11 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60190 00:06:43.030 02:51:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:43.030 Process with pid 60172 is not found 00:06:43.030 Process with pid 60190 is not found 00:06:43.030 02:51:13 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:43.030 02:51:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60172 ]] 00:06:43.030 02:51:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60172 00:06:43.030 02:51:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60172 ']' 00:06:43.030 02:51:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60172 00:06:43.030 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60172) - No such process 00:06:43.030 02:51:13 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60172 is not found' 00:06:43.030 02:51:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60190 ]] 00:06:43.030 02:51:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60190 00:06:43.030 02:51:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60190 ']' 00:06:43.031 02:51:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60190 00:06:43.031 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60190) - No such process 00:06:43.031 02:51:13 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60190 is not found' 00:06:43.031 02:51:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:43.031 00:06:43.031 real 0m40.399s 00:06:43.031 user 1m10.897s 00:06:43.031 sys 0m5.990s 00:06:43.031 02:51:13 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.031 02:51:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.031 ************************************ 00:06:43.031 END TEST cpu_locks 00:06:43.031 ************************************ 00:06:43.031 00:06:43.031 real 1m11.513s 00:06:43.031 user 2m14.211s 00:06:43.031 sys 0m9.613s 00:06:43.031 02:51:13 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.031 02:51:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.031 ************************************ 00:06:43.031 END TEST event 00:06:43.031 ************************************ 00:06:43.031 02:51:13 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:43.031 02:51:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.031 02:51:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.031 02:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:43.031 ************************************ 00:06:43.031 START TEST thread 00:06:43.031 ************************************ 00:06:43.031 02:51:13 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:43.031 * Looking for test storage... 
00:06:43.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:43.031 02:51:13 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:43.031 02:51:13 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:43.031 02:51:13 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:43.290 02:51:13 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:43.290 02:51:13 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.290 02:51:13 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.290 02:51:13 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.290 02:51:13 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.290 02:51:13 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.290 02:51:13 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.290 02:51:13 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.290 02:51:13 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.290 02:51:13 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.290 02:51:13 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.290 02:51:13 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.290 02:51:13 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:43.290 02:51:13 thread -- scripts/common.sh@345 -- # : 1 00:06:43.290 02:51:13 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.290 02:51:13 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.290 02:51:13 thread -- scripts/common.sh@365 -- # decimal 1 00:06:43.290 02:51:13 thread -- scripts/common.sh@353 -- # local d=1 00:06:43.290 02:51:13 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.290 02:51:13 thread -- scripts/common.sh@355 -- # echo 1 00:06:43.290 02:51:13 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.290 02:51:13 thread -- scripts/common.sh@366 -- # decimal 2 00:06:43.290 02:51:13 thread -- scripts/common.sh@353 -- # local d=2 00:06:43.290 02:51:13 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.290 02:51:13 thread -- scripts/common.sh@355 -- # echo 2 00:06:43.290 02:51:13 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.290 02:51:13 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.290 02:51:13 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.290 02:51:13 thread -- scripts/common.sh@368 -- # return 0 00:06:43.290 02:51:13 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.290 02:51:13 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:43.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.290 --rc genhtml_branch_coverage=1 00:06:43.290 --rc genhtml_function_coverage=1 00:06:43.290 --rc genhtml_legend=1 00:06:43.290 --rc geninfo_all_blocks=1 00:06:43.290 --rc geninfo_unexecuted_blocks=1 00:06:43.290 00:06:43.290 ' 00:06:43.290 02:51:13 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:43.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.290 --rc genhtml_branch_coverage=1 00:06:43.290 --rc genhtml_function_coverage=1 00:06:43.290 --rc genhtml_legend=1 00:06:43.290 --rc geninfo_all_blocks=1 00:06:43.290 --rc geninfo_unexecuted_blocks=1 00:06:43.290 00:06:43.290 ' 00:06:43.290 02:51:13 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:43.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:43.290 --rc genhtml_branch_coverage=1 00:06:43.290 --rc genhtml_function_coverage=1 00:06:43.290 --rc genhtml_legend=1 00:06:43.290 --rc geninfo_all_blocks=1 00:06:43.290 --rc geninfo_unexecuted_blocks=1 00:06:43.290 00:06:43.290 ' 00:06:43.290 02:51:13 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:43.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.290 --rc genhtml_branch_coverage=1 00:06:43.290 --rc genhtml_function_coverage=1 00:06:43.290 --rc genhtml_legend=1 00:06:43.290 --rc geninfo_all_blocks=1 00:06:43.290 --rc geninfo_unexecuted_blocks=1 00:06:43.290 00:06:43.290 ' 00:06:43.290 02:51:13 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:43.290 02:51:13 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:43.290 02:51:13 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.290 02:51:13 thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.290 ************************************ 00:06:43.290 START TEST thread_poller_perf 00:06:43.290 ************************************ 00:06:43.290 02:51:13 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:43.290 [2024-12-05 02:51:13.997261] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:43.290 [2024-12-05 02:51:13.997628] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60372 ] 00:06:43.550 [2024-12-05 02:51:14.185204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.550 [2024-12-05 02:51:14.307954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.550 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:44.925 [2024-12-05T02:51:15.769Z] ====================================== 00:06:44.925 [2024-12-05T02:51:15.769Z] busy:2215591428 (cyc) 00:06:44.925 [2024-12-05T02:51:15.769Z] total_run_count: 348000 00:06:44.925 [2024-12-05T02:51:15.769Z] tsc_hz: 2200000000 (cyc) 00:06:44.925 [2024-12-05T02:51:15.769Z] ====================================== 00:06:44.925 [2024-12-05T02:51:15.769Z] poller_cost: 6366 (cyc), 2893 (nsec) 00:06:44.925 00:06:44.925 real 0m1.552s 00:06:44.925 user 0m1.345s 00:06:44.925 sys 0m0.097s 00:06:44.925 02:51:15 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.925 02:51:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:44.925 ************************************ 00:06:44.925 END TEST thread_poller_perf 00:06:44.925 ************************************ 00:06:44.926 02:51:15 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:44.926 02:51:15 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:44.926 02:51:15 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.926 02:51:15 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.926 ************************************ 00:06:44.926 START TEST thread_poller_perf 00:06:44.926 ************************************ 00:06:44.926 02:51:15 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:44.926 [2024-12-05 02:51:15.605652] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:44.926 [2024-12-05 02:51:15.605836] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60410 ] 00:06:45.183 [2024-12-05 02:51:15.782579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.183 Running 1000 pollers for 1 seconds with 0 microseconds period. 
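The poller_cost figures printed above follow directly from the counters in the same table; a quick check with the numbers from this run, as shell arithmetic (nothing beyond what the log already reports):
# busy cycles / total_run_count -> cost per poller invocation in TSC cycles
echo $(( 2215591428 / 348000 ))              # ~6366 (cyc)
# converted to nanoseconds at tsc_hz = 2200000000 (2.2 GHz)
echo $(( 6366 * 1000000000 / 2200000000 ))   # ~2893 (nsec)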
00:06:45.183 [2024-12-05 02:51:15.872869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.583 [2024-12-05T02:51:17.427Z] ====================================== 00:06:46.583 [2024-12-05T02:51:17.427Z] busy:2203609970 (cyc) 00:06:46.583 [2024-12-05T02:51:17.427Z] total_run_count: 4450000 00:06:46.583 [2024-12-05T02:51:17.427Z] tsc_hz: 2200000000 (cyc) 00:06:46.583 [2024-12-05T02:51:17.427Z] ====================================== 00:06:46.583 [2024-12-05T02:51:17.427Z] poller_cost: 495 (cyc), 225 (nsec) 00:06:46.583 ************************************ 00:06:46.583 END TEST thread_poller_perf 00:06:46.583 ************************************ 00:06:46.583 00:06:46.583 real 0m1.516s 00:06:46.583 user 0m1.324s 00:06:46.583 sys 0m0.083s 00:06:46.583 02:51:17 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.583 02:51:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:46.583 02:51:17 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:46.583 ************************************ 00:06:46.583 END TEST thread 00:06:46.583 ************************************ 00:06:46.583 00:06:46.583 real 0m3.353s 00:06:46.583 user 0m2.805s 00:06:46.583 sys 0m0.313s 00:06:46.583 02:51:17 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.583 02:51:17 thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.583 02:51:17 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:46.583 02:51:17 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:46.583 02:51:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.583 02:51:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.583 02:51:17 -- common/autotest_common.sh@10 -- # set +x 00:06:46.583 ************************************ 00:06:46.583 START TEST app_cmdline 00:06:46.584 ************************************ 00:06:46.584 02:51:17 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:46.584 * Looking for test storage... 
00:06:46.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:46.584 02:51:17 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.584 02:51:17 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.584 02:51:17 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.584 02:51:17 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.584 02:51:17 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:46.584 02:51:17 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.584 02:51:17 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.584 --rc genhtml_branch_coverage=1 00:06:46.584 --rc genhtml_function_coverage=1 00:06:46.584 --rc genhtml_legend=1 00:06:46.584 --rc geninfo_all_blocks=1 00:06:46.584 --rc geninfo_unexecuted_blocks=1 00:06:46.584 00:06:46.584 ' 00:06:46.584 02:51:17 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.584 --rc genhtml_branch_coverage=1 00:06:46.584 --rc genhtml_function_coverage=1 00:06:46.584 --rc genhtml_legend=1 00:06:46.584 --rc geninfo_all_blocks=1 00:06:46.584 --rc geninfo_unexecuted_blocks=1 00:06:46.584 00:06:46.584 ' 00:06:46.584 02:51:17 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.584 --rc genhtml_branch_coverage=1 00:06:46.584 --rc genhtml_function_coverage=1 00:06:46.584 --rc genhtml_legend=1 00:06:46.584 --rc geninfo_all_blocks=1 00:06:46.584 --rc geninfo_unexecuted_blocks=1 00:06:46.584 00:06:46.584 ' 00:06:46.584 02:51:17 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.584 --rc genhtml_branch_coverage=1 00:06:46.584 --rc genhtml_function_coverage=1 00:06:46.584 --rc genhtml_legend=1 00:06:46.584 --rc geninfo_all_blocks=1 00:06:46.584 --rc geninfo_unexecuted_blocks=1 00:06:46.584 00:06:46.584 ' 00:06:46.584 02:51:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:46.584 02:51:17 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60493 00:06:46.584 02:51:17 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60493 00:06:46.584 02:51:17 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:46.584 02:51:17 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60493 ']' 00:06:46.584 02:51:17 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.584 02:51:17 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.584 02:51:17 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.584 02:51:17 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.584 02:51:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:46.870 [2024-12-05 02:51:17.497350] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:06:46.870 [2024-12-05 02:51:17.498211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60493 ] 00:06:46.870 [2024-12-05 02:51:17.676850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.140 [2024-12-05 02:51:17.764208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.140 [2024-12-05 02:51:17.946326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.709 02:51:18 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.709 02:51:18 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:47.709 02:51:18 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:47.969 { 00:06:47.969 "version": "SPDK v25.01-pre git sha1 8d3947977", 00:06:47.969 "fields": { 00:06:47.969 "major": 25, 00:06:47.969 "minor": 1, 00:06:47.969 "patch": 0, 00:06:47.969 "suffix": "-pre", 00:06:47.969 "commit": "8d3947977" 00:06:47.969 } 00:06:47.969 } 00:06:47.969 02:51:18 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:47.969 02:51:18 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:47.969 02:51:18 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:47.969 02:51:18 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:47.969 02:51:18 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:47.969 02:51:18 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:47.969 02:51:18 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.969 02:51:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:47.969 02:51:18 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:47.969 02:51:18 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.969 02:51:18 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:47.969 02:51:18 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:47.969 02:51:18 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:47.969 02:51:18 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:47.969 02:51:18 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:47.969 02:51:18 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:47.969 02:51:18 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.969 02:51:18 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:47.969 02:51:18 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.969 02:51:18 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:47.969 02:51:18 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.969 02:51:18 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:47.969 02:51:18 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:47.969 02:51:18 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.228 request: 00:06:48.228 { 00:06:48.228 "method": "env_dpdk_get_mem_stats", 00:06:48.228 "req_id": 1 00:06:48.228 } 00:06:48.228 Got JSON-RPC error response 00:06:48.228 response: 00:06:48.228 { 00:06:48.228 "code": -32601, 00:06:48.228 "message": "Method not found" 00:06:48.228 } 00:06:48.228 02:51:19 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:48.228 02:51:19 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:48.228 02:51:19 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:48.228 02:51:19 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:48.228 02:51:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60493 00:06:48.228 02:51:19 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60493 ']' 00:06:48.228 02:51:19 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60493 00:06:48.228 02:51:19 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:48.228 02:51:19 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.228 02:51:19 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60493 00:06:48.488 killing process with pid 60493 00:06:48.488 02:51:19 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.488 02:51:19 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.488 02:51:19 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60493' 00:06:48.488 02:51:19 app_cmdline -- common/autotest_common.sh@973 -- # kill 60493 00:06:48.488 02:51:19 app_cmdline -- common/autotest_common.sh@978 -- # wait 60493 00:06:50.395 00:06:50.395 real 0m3.848s 00:06:50.395 user 0m4.388s 00:06:50.395 sys 0m0.533s 00:06:50.395 02:51:21 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.395 ************************************ 00:06:50.395 END TEST app_cmdline 00:06:50.395 ************************************ 00:06:50.395 02:51:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:50.395 02:51:21 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:50.395 02:51:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.395 02:51:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.395 02:51:21 -- common/autotest_common.sh@10 -- # set +x 00:06:50.395 ************************************ 00:06:50.395 START TEST version 00:06:50.395 ************************************ 00:06:50.395 02:51:21 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:50.395 * Looking for test storage... 
00:06:50.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:50.395 02:51:21 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:50.395 02:51:21 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:50.395 02:51:21 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:50.656 02:51:21 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:50.656 02:51:21 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.656 02:51:21 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.656 02:51:21 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.656 02:51:21 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.656 02:51:21 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.656 02:51:21 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.656 02:51:21 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.656 02:51:21 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.656 02:51:21 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.656 02:51:21 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.656 02:51:21 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.656 02:51:21 version -- scripts/common.sh@344 -- # case "$op" in 00:06:50.656 02:51:21 version -- scripts/common.sh@345 -- # : 1 00:06:50.656 02:51:21 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.656 02:51:21 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:50.656 02:51:21 version -- scripts/common.sh@365 -- # decimal 1 00:06:50.656 02:51:21 version -- scripts/common.sh@353 -- # local d=1 00:06:50.656 02:51:21 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.656 02:51:21 version -- scripts/common.sh@355 -- # echo 1 00:06:50.656 02:51:21 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.656 02:51:21 version -- scripts/common.sh@366 -- # decimal 2 00:06:50.656 02:51:21 version -- scripts/common.sh@353 -- # local d=2 00:06:50.656 02:51:21 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.656 02:51:21 version -- scripts/common.sh@355 -- # echo 2 00:06:50.656 02:51:21 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.656 02:51:21 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.656 02:51:21 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.656 02:51:21 version -- scripts/common.sh@368 -- # return 0 00:06:50.656 02:51:21 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.656 02:51:21 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:50.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.656 --rc genhtml_branch_coverage=1 00:06:50.656 --rc genhtml_function_coverage=1 00:06:50.656 --rc genhtml_legend=1 00:06:50.656 --rc geninfo_all_blocks=1 00:06:50.656 --rc geninfo_unexecuted_blocks=1 00:06:50.656 00:06:50.656 ' 00:06:50.656 02:51:21 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:50.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.656 --rc genhtml_branch_coverage=1 00:06:50.656 --rc genhtml_function_coverage=1 00:06:50.656 --rc genhtml_legend=1 00:06:50.656 --rc geninfo_all_blocks=1 00:06:50.656 --rc geninfo_unexecuted_blocks=1 00:06:50.656 00:06:50.656 ' 00:06:50.656 02:51:21 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:50.656 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:50.656 --rc genhtml_branch_coverage=1 00:06:50.656 --rc genhtml_function_coverage=1 00:06:50.656 --rc genhtml_legend=1 00:06:50.656 --rc geninfo_all_blocks=1 00:06:50.656 --rc geninfo_unexecuted_blocks=1 00:06:50.656 00:06:50.656 ' 00:06:50.656 02:51:21 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:50.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.656 --rc genhtml_branch_coverage=1 00:06:50.656 --rc genhtml_function_coverage=1 00:06:50.656 --rc genhtml_legend=1 00:06:50.656 --rc geninfo_all_blocks=1 00:06:50.656 --rc geninfo_unexecuted_blocks=1 00:06:50.656 00:06:50.656 ' 00:06:50.656 02:51:21 version -- app/version.sh@17 -- # get_header_version major 00:06:50.656 02:51:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:50.656 02:51:21 version -- app/version.sh@14 -- # cut -f2 00:06:50.656 02:51:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.656 02:51:21 version -- app/version.sh@17 -- # major=25 00:06:50.656 02:51:21 version -- app/version.sh@18 -- # get_header_version minor 00:06:50.656 02:51:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:50.656 02:51:21 version -- app/version.sh@14 -- # cut -f2 00:06:50.656 02:51:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.656 02:51:21 version -- app/version.sh@18 -- # minor=1 00:06:50.656 02:51:21 version -- app/version.sh@19 -- # get_header_version patch 00:06:50.656 02:51:21 version -- app/version.sh@14 -- # cut -f2 00:06:50.656 02:51:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.656 02:51:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:50.656 02:51:21 version -- app/version.sh@19 -- # patch=0 00:06:50.656 02:51:21 version -- app/version.sh@20 -- # get_header_version suffix 00:06:50.656 02:51:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:50.656 02:51:21 version -- app/version.sh@14 -- # cut -f2 00:06:50.656 02:51:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.656 02:51:21 version -- app/version.sh@20 -- # suffix=-pre 00:06:50.656 02:51:21 version -- app/version.sh@22 -- # version=25.1 00:06:50.656 02:51:21 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:50.656 02:51:21 version -- app/version.sh@28 -- # version=25.1rc0 00:06:50.656 02:51:21 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:50.656 02:51:21 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:50.656 02:51:21 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:50.656 02:51:21 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:50.656 00:06:50.656 real 0m0.259s 00:06:50.656 user 0m0.151s 00:06:50.656 sys 0m0.143s 00:06:50.656 ************************************ 00:06:50.656 END TEST version 00:06:50.656 ************************************ 00:06:50.656 02:51:21 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.656 02:51:21 version -- common/autotest_common.sh@10 -- # set +x 00:06:50.656 02:51:21 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:50.656 02:51:21 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:50.656 02:51:21 -- spdk/autotest.sh@194 -- # uname -s 00:06:50.656 02:51:21 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:50.656 02:51:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:50.656 02:51:21 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:06:50.656 02:51:21 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:06:50.656 02:51:21 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:50.656 02:51:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.656 02:51:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.656 02:51:21 -- common/autotest_common.sh@10 -- # set +x 00:06:50.656 ************************************ 00:06:50.656 START TEST spdk_dd 00:06:50.656 ************************************ 00:06:50.656 02:51:21 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:50.656 * Looking for test storage... 00:06:50.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:50.656 02:51:21 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:50.656 02:51:21 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:50.656 02:51:21 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 00:06:50.915 02:51:21 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:50.915 02:51:21 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.915 02:51:21 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.915 02:51:21 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.915 02:51:21 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.915 02:51:21 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.915 02:51:21 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.915 02:51:21 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.915 02:51:21 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.915 02:51:21 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.915 02:51:21 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@345 -- # : 1 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@368 -- # return 0 00:06:50.916 02:51:21 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.916 02:51:21 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:50.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.916 --rc genhtml_branch_coverage=1 00:06:50.916 --rc genhtml_function_coverage=1 00:06:50.916 --rc genhtml_legend=1 00:06:50.916 --rc geninfo_all_blocks=1 00:06:50.916 --rc geninfo_unexecuted_blocks=1 00:06:50.916 00:06:50.916 ' 00:06:50.916 02:51:21 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:50.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.916 --rc genhtml_branch_coverage=1 00:06:50.916 --rc genhtml_function_coverage=1 00:06:50.916 --rc genhtml_legend=1 00:06:50.916 --rc geninfo_all_blocks=1 00:06:50.916 --rc geninfo_unexecuted_blocks=1 00:06:50.916 00:06:50.916 ' 00:06:50.916 02:51:21 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:50.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.916 --rc genhtml_branch_coverage=1 00:06:50.916 --rc genhtml_function_coverage=1 00:06:50.916 --rc genhtml_legend=1 00:06:50.916 --rc geninfo_all_blocks=1 00:06:50.916 --rc geninfo_unexecuted_blocks=1 00:06:50.916 00:06:50.916 ' 00:06:50.916 02:51:21 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:50.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.916 --rc genhtml_branch_coverage=1 00:06:50.916 --rc genhtml_function_coverage=1 00:06:50.916 --rc genhtml_legend=1 00:06:50.916 --rc geninfo_all_blocks=1 00:06:50.916 --rc geninfo_unexecuted_blocks=1 00:06:50.916 00:06:50.916 ' 00:06:50.916 02:51:21 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.916 02:51:21 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.916 02:51:21 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.916 02:51:21 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.916 02:51:21 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.916 02:51:21 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:50.916 02:51:21 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.916 02:51:21 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:51.175 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:51.175 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:51.175 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:51.175 02:51:21 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:51.175 02:51:21 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@233 -- # local class 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@235 -- # local progif 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@236 -- # class=01 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:06:51.175 02:51:21 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:06:51.175 02:51:21 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:51.175 02:51:21 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:51.175 02:51:21 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:51.175 02:51:21 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:51.175 02:51:21 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:21 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.175 02:51:21 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:51.175 
02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 
02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:51.175 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_fuse_dispatcher.so.1.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read 
-r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:51.464 * spdk_dd linked to liburing 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:51.464 02:51:22 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:51.464 02:51:22 spdk_dd 
-- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:51.464 02:51:22 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:06:51.465 02:51:22 spdk_dd -- 
common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:51.465 02:51:22 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:06:51.465 02:51:22 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:51.465 02:51:22 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:51.465 02:51:22 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:51.465 02:51:22 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:51.465 02:51:22 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:51.465 02:51:22 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:51.465 02:51:22 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:51.465 02:51:22 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.465 02:51:22 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:51.465 ************************************ 00:06:51.465 START TEST spdk_dd_basic_rw 00:06:51.465 ************************************ 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:51.465 * Looking for test storage... 
00:06:51.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:51.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.465 --rc genhtml_branch_coverage=1 00:06:51.465 --rc genhtml_function_coverage=1 00:06:51.465 --rc genhtml_legend=1 00:06:51.465 --rc geninfo_all_blocks=1 00:06:51.465 --rc geninfo_unexecuted_blocks=1 00:06:51.465 00:06:51.465 ' 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:51.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.465 --rc genhtml_branch_coverage=1 00:06:51.465 --rc genhtml_function_coverage=1 00:06:51.465 --rc genhtml_legend=1 00:06:51.465 --rc geninfo_all_blocks=1 00:06:51.465 --rc geninfo_unexecuted_blocks=1 00:06:51.465 00:06:51.465 ' 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:51.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.465 --rc genhtml_branch_coverage=1 00:06:51.465 --rc genhtml_function_coverage=1 00:06:51.465 --rc genhtml_legend=1 00:06:51.465 --rc geninfo_all_blocks=1 00:06:51.465 --rc geninfo_unexecuted_blocks=1 00:06:51.465 00:06:51.465 ' 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:51.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.465 --rc genhtml_branch_coverage=1 00:06:51.465 --rc genhtml_function_coverage=1 00:06:51.465 --rc genhtml_legend=1 00:06:51.465 --rc geninfo_all_blocks=1 00:06:51.465 --rc geninfo_unexecuted_blocks=1 00:06:51.465 00:06:51.465 ' 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.465 02:51:22 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:51.465 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:51.728 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:51.728 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:51.729 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:51.729 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:51.729 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:51.729 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:51.729 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:51.729 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:51.729 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:51.729 02:51:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:51.729 02:51:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:51.729 02:51:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:51.729 02:51:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.729 02:51:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:51.987 ************************************ 00:06:51.987 START TEST dd_bs_lt_native_bs 00:06:51.987 ************************************ 00:06:51.987 02:51:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:51.987 02:51:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:06:51.987 02:51:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:51.987 02:51:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.987 02:51:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.987 02:51:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.987 02:51:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.987 02:51:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.987 02:51:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.988 02:51:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.988 02:51:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.988 02:51:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:51.988 { 00:06:51.988 "subsystems": [ 00:06:51.988 { 00:06:51.988 "subsystem": "bdev", 00:06:51.988 "config": [ 00:06:51.988 { 00:06:51.988 "params": { 00:06:51.988 "trtype": "pcie", 00:06:51.988 "traddr": "0000:00:10.0", 00:06:51.988 "name": "Nvme0" 00:06:51.988 }, 00:06:51.988 "method": "bdev_nvme_attach_controller" 00:06:51.988 }, 00:06:51.988 { 00:06:51.988 "method": "bdev_wait_for_examine" 00:06:51.988 } 00:06:51.988 ] 00:06:51.988 } 00:06:51.988 ] 00:06:51.988 } 00:06:51.988 [2024-12-05 02:51:22.692385] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:51.988 [2024-12-05 02:51:22.692557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60863 ] 00:06:52.247 [2024-12-05 02:51:22.878303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.247 [2024-12-05 02:51:23.001967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.507 [2024-12-05 02:51:23.203810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.766 [2024-12-05 02:51:23.391777] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:52.766 [2024-12-05 02:51:23.391915] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.334 [2024-12-05 02:51:23.910225] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:53.334 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:06:53.334 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:53.334 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:06:53.335 ************************************ 00:06:53.335 END TEST dd_bs_lt_native_bs 00:06:53.335 ************************************ 00:06:53.335 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:06:53.335 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:06:53.335 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:53.335 00:06:53.335 real 0m1.562s 00:06:53.335 user 0m1.281s 00:06:53.335 sys 0m0.230s 00:06:53.335 
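The trace above (dd/common.sh@124-@134) derives the drive's native block size by parsing spdk_nvme_identify output: the namespace's in-use format is LBA Format #04, whose data size is 4096 bytes, so native_bs=4096; dd_bs_lt_native_bs then checks that spdk_dd refuses --bs=2048, which it does with "--bs value cannot be less than input (1) neither output (4096) native block size". A minimal bash sketch of both steps, reconstructed from the =~ checks and commands in the log (the function name, SPDK_BIN, in.bin and nvme0.json are illustrative stand-ins, not the literal dd/common.sh or basic_rw.sh source):

    get_native_nvme_bs_sketch() {
      local pci=$1 lbaf id
      # Capture identify output for the controller, as dd/common.sh@126 does.
      mapfile -t id < <("$SPDK_BIN/spdk_nvme_identify" -r "trtype:pcie traddr:$pci")
      # Which LBA format is currently in use (here: #04).
      local cur_re='Current LBA Format: *LBA Format #([0-9]+)'
      [[ ${id[*]} =~ $cur_re ]] || return 1
      lbaf=${BASH_REMATCH[1]}
      # The data size of that format (here: 4096 bytes) is the native block size.
      local bs_re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
      [[ ${id[*]} =~ $bs_re ]] || return 1
      echo "${BASH_REMATCH[1]}"
    }

    # dd_bs_lt_native_bs assertion: spdk_dd must fail when --bs is smaller than
    # the native block size (in.bin / nvme0.json stand in for the /dev/fd
    # process substitutions the real test uses).
    if "$SPDK_BIN/spdk_dd" --if=in.bin --ob=Nvme0n1 --bs=2048 --json nvme0.json; then
      echo "FAIL: --bs=2048 accepted although native bs is 4096" >&2
      exit 1
    fi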
02:51:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.335 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:53.593 02:51:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:53.593 02:51:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:53.593 02:51:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.593 02:51:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:53.593 ************************************ 00:06:53.593 START TEST dd_rw 00:06:53.593 ************************************ 00:06:53.593 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:06:53.594 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:53.594 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:53.594 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:53.594 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:53.594 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:53.594 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:53.594 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:53.594 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:53.594 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:53.594 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:53.594 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:53.594 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:53.594 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:53.594 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:53.594 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:53.594 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:53.594 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:53.594 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:54.162 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:54.162 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:54.162 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:54.162 02:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:54.162 { 00:06:54.162 "subsystems": [ 00:06:54.162 { 00:06:54.162 "subsystem": "bdev", 00:06:54.162 "config": [ 00:06:54.162 { 00:06:54.162 "params": { 00:06:54.162 "trtype": "pcie", 00:06:54.162 "traddr": "0000:00:10.0", 00:06:54.162 "name": "Nvme0" 00:06:54.162 }, 00:06:54.162 "method": "bdev_nvme_attach_controller" 00:06:54.162 }, 00:06:54.162 { 00:06:54.162 "method": "bdev_wait_for_examine" 00:06:54.162 } 00:06:54.162 ] 00:06:54.162 } 00:06:54.162 
] 00:06:54.162 } 00:06:54.162 [2024-12-05 02:51:24.906215] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:54.162 [2024-12-05 02:51:24.906388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60912 ] 00:06:54.421 [2024-12-05 02:51:25.085728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.421 [2024-12-05 02:51:25.175013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.679 [2024-12-05 02:51:25.320569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.679  [2024-12-05T02:51:26.458Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:55.615 00:06:55.615 02:51:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:55.615 02:51:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:55.615 02:51:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:55.615 02:51:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:55.874 { 00:06:55.874 "subsystems": [ 00:06:55.874 { 00:06:55.874 "subsystem": "bdev", 00:06:55.874 "config": [ 00:06:55.874 { 00:06:55.874 "params": { 00:06:55.874 "trtype": "pcie", 00:06:55.874 "traddr": "0000:00:10.0", 00:06:55.874 "name": "Nvme0" 00:06:55.874 }, 00:06:55.874 "method": "bdev_nvme_attach_controller" 00:06:55.874 }, 00:06:55.874 { 00:06:55.874 "method": "bdev_wait_for_examine" 00:06:55.874 } 00:06:55.874 ] 00:06:55.874 } 00:06:55.874 ] 00:06:55.874 } 00:06:55.874 [2024-12-05 02:51:26.529813] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:06:55.874 [2024-12-05 02:51:26.530227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60932 ] 00:06:55.874 [2024-12-05 02:51:26.709213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.134 [2024-12-05 02:51:26.804364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.134 [2024-12-05 02:51:26.957366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.394  [2024-12-05T02:51:28.175Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:57.331 00:06:57.331 02:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:57.331 02:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:57.331 02:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:57.331 02:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:57.331 02:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:57.331 02:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:57.331 02:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:57.331 02:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:57.331 02:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:57.331 02:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:57.331 02:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:57.331 { 00:06:57.331 "subsystems": [ 00:06:57.331 { 00:06:57.331 "subsystem": "bdev", 00:06:57.331 "config": [ 00:06:57.331 { 00:06:57.331 "params": { 00:06:57.331 "trtype": "pcie", 00:06:57.331 "traddr": "0000:00:10.0", 00:06:57.331 "name": "Nvme0" 00:06:57.331 }, 00:06:57.331 "method": "bdev_nvme_attach_controller" 00:06:57.331 }, 00:06:57.331 { 00:06:57.331 "method": "bdev_wait_for_examine" 00:06:57.331 } 00:06:57.331 ] 00:06:57.331 } 00:06:57.331 ] 00:06:57.331 } 00:06:57.331 [2024-12-05 02:51:27.949566] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:06:57.331 [2024-12-05 02:51:27.950001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60965 ] 00:06:57.331 [2024-12-05 02:51:28.128466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.591 [2024-12-05 02:51:28.215578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.591 [2024-12-05 02:51:28.360596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.850  [2024-12-05T02:51:29.630Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:58.786 00:06:58.786 02:51:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:58.786 02:51:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:58.786 02:51:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:58.786 02:51:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:58.786 02:51:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:58.786 02:51:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:58.786 02:51:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:59.352 02:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:59.352 02:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:59.352 02:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:59.352 02:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:59.352 { 00:06:59.352 "subsystems": [ 00:06:59.352 { 00:06:59.352 "subsystem": "bdev", 00:06:59.352 "config": [ 00:06:59.352 { 00:06:59.352 "params": { 00:06:59.352 "trtype": "pcie", 00:06:59.352 "traddr": "0000:00:10.0", 00:06:59.352 "name": "Nvme0" 00:06:59.352 }, 00:06:59.352 "method": "bdev_nvme_attach_controller" 00:06:59.352 }, 00:06:59.352 { 00:06:59.352 "method": "bdev_wait_for_examine" 00:06:59.352 } 00:06:59.352 ] 00:06:59.352 } 00:06:59.352 ] 00:06:59.352 } 00:06:59.611 [2024-12-05 02:51:30.198081] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
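Each block-size/queue-depth combination in the dd_rw sweep repeats the four-step cycle visible above for bs=4096, qd=1: write the pre-generated dump file to the Nvme0n1 bdev, read the same number of blocks back into a second file, diff the two, then zero the bdev before the next pass. A condensed sketch of that cycle with options taken from the log (one_cycle, SPDK_BIN, DUMP0/DUMP1 and nvme0.json are illustrative names, not the literal basic_rw.sh source):

    one_cycle() {                        # e.g. one_cycle 4096 1 15
      local bs=$1 qd=$2 count=$3
      # 1) write count*bs bytes of generated data to the bdev
      "$SPDK_BIN/spdk_dd" --if="$DUMP0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" \
          --json nvme0.json
      # 2) read the same range back from the bdev
      "$SPDK_BIN/spdk_dd" --ib=Nvme0n1 --of="$DUMP1" --bs="$bs" --qd="$qd" \
          --count="$count" --json nvme0.json
      # 3) the round-tripped data must match what was written
      diff -q "$DUMP0" "$DUMP1"
      # 4) clear_nvme: overwrite the tested region with 1 MiB of zeroes
      "$SPDK_BIN/spdk_dd" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 \
          --json nvme0.json
    }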
00:06:59.611 [2024-12-05 02:51:30.198613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60996 ] 00:06:59.611 [2024-12-05 02:51:30.390086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.869 [2024-12-05 02:51:30.475094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.869 [2024-12-05 02:51:30.627506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.128  [2024-12-05T02:51:31.540Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:00.696 00:07:00.696 02:51:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:00.696 02:51:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:00.696 02:51:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:00.696 02:51:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:00.696 { 00:07:00.696 "subsystems": [ 00:07:00.696 { 00:07:00.696 "subsystem": "bdev", 00:07:00.696 "config": [ 00:07:00.696 { 00:07:00.696 "params": { 00:07:00.696 "trtype": "pcie", 00:07:00.696 "traddr": "0000:00:10.0", 00:07:00.696 "name": "Nvme0" 00:07:00.696 }, 00:07:00.696 "method": "bdev_nvme_attach_controller" 00:07:00.696 }, 00:07:00.696 { 00:07:00.696 "method": "bdev_wait_for_examine" 00:07:00.696 } 00:07:00.696 ] 00:07:00.696 } 00:07:00.696 ] 00:07:00.696 } 00:07:00.956 [2024-12-05 02:51:31.560561] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:00.956 [2024-12-05 02:51:31.560709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61016 ] 00:07:00.956 [2024-12-05 02:51:31.731211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.216 [2024-12-05 02:51:31.837868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.216 [2024-12-05 02:51:32.021303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.475  [2024-12-05T02:51:33.257Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:02.413 00:07:02.413 02:51:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:02.413 02:51:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:02.413 02:51:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:02.413 02:51:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:02.413 02:51:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:02.413 02:51:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:02.413 02:51:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:02.413 02:51:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:02.413 02:51:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:02.413 02:51:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:02.413 02:51:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:02.413 { 00:07:02.413 "subsystems": [ 00:07:02.413 { 00:07:02.413 "subsystem": "bdev", 00:07:02.413 "config": [ 00:07:02.413 { 00:07:02.413 "params": { 00:07:02.413 "trtype": "pcie", 00:07:02.413 "traddr": "0000:00:10.0", 00:07:02.413 "name": "Nvme0" 00:07:02.413 }, 00:07:02.413 "method": "bdev_nvme_attach_controller" 00:07:02.413 }, 00:07:02.413 { 00:07:02.413 "method": "bdev_wait_for_examine" 00:07:02.413 } 00:07:02.413 ] 00:07:02.413 } 00:07:02.413 ] 00:07:02.413 } 00:07:02.413 [2024-12-05 02:51:33.183070] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:02.413 [2024-12-05 02:51:33.183251] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61049 ] 00:07:02.672 [2024-12-05 02:51:33.365798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.672 [2024-12-05 02:51:33.450042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.932 [2024-12-05 02:51:33.595221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.932  [2024-12-05T02:51:34.722Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:03.878 00:07:03.878 02:51:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:03.878 02:51:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:03.878 02:51:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:03.878 02:51:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:03.878 02:51:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:03.878 02:51:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:03.878 02:51:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:03.878 02:51:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.458 02:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:04.458 02:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:04.458 02:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:04.458 02:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.458 { 00:07:04.458 "subsystems": [ 00:07:04.458 { 00:07:04.458 "subsystem": "bdev", 00:07:04.458 "config": [ 00:07:04.458 { 00:07:04.458 "params": { 00:07:04.458 "trtype": "pcie", 00:07:04.458 "traddr": "0000:00:10.0", 00:07:04.458 "name": "Nvme0" 00:07:04.458 }, 00:07:04.458 "method": "bdev_nvme_attach_controller" 00:07:04.458 }, 00:07:04.458 { 00:07:04.458 "method": "bdev_wait_for_examine" 00:07:04.458 } 00:07:04.458 ] 00:07:04.458 } 00:07:04.458 ] 00:07:04.458 } 00:07:04.458 [2024-12-05 02:51:35.128333] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:04.458 [2024-12-05 02:51:35.128673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61080 ] 00:07:04.458 [2024-12-05 02:51:35.294739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.716 [2024-12-05 02:51:35.385989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.974 [2024-12-05 02:51:35.571753] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.974  [2024-12-05T02:51:37.196Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:06.352 00:07:06.352 02:51:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:06.352 02:51:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:06.352 02:51:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:06.352 02:51:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.352 { 00:07:06.352 "subsystems": [ 00:07:06.352 { 00:07:06.352 "subsystem": "bdev", 00:07:06.352 "config": [ 00:07:06.352 { 00:07:06.352 "params": { 00:07:06.352 "trtype": "pcie", 00:07:06.352 "traddr": "0000:00:10.0", 00:07:06.352 "name": "Nvme0" 00:07:06.352 }, 00:07:06.352 "method": "bdev_nvme_attach_controller" 00:07:06.352 }, 00:07:06.352 { 00:07:06.352 "method": "bdev_wait_for_examine" 00:07:06.352 } 00:07:06.352 ] 00:07:06.352 } 00:07:06.352 ] 00:07:06.352 } 00:07:06.352 [2024-12-05 02:51:36.922950] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:06.352 [2024-12-05 02:51:36.923172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61100 ] 00:07:06.352 [2024-12-05 02:51:37.097284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.612 [2024-12-05 02:51:37.206037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.612 [2024-12-05 02:51:37.402544] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.871  [2024-12-05T02:51:38.654Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:07.810 00:07:07.810 02:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.810 02:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:07.810 02:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:07.810 02:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:07.810 02:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:07.810 02:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:07.810 02:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:07.810 02:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:07.810 02:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:07.810 02:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:07.810 02:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:07.810 { 00:07:07.810 "subsystems": [ 00:07:07.810 { 00:07:07.810 "subsystem": "bdev", 00:07:07.810 "config": [ 00:07:07.810 { 00:07:07.810 "params": { 00:07:07.810 "trtype": "pcie", 00:07:07.810 "traddr": "0000:00:10.0", 00:07:07.810 "name": "Nvme0" 00:07:07.810 }, 00:07:07.810 "method": "bdev_nvme_attach_controller" 00:07:07.810 }, 00:07:07.810 { 00:07:07.810 "method": "bdev_wait_for_examine" 00:07:07.810 } 00:07:07.810 ] 00:07:07.810 } 00:07:07.810 ] 00:07:07.810 } 00:07:07.810 [2024-12-05 02:51:38.544542] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:07.810 [2024-12-05 02:51:38.544712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61136 ] 00:07:08.069 [2024-12-05 02:51:38.726102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.070 [2024-12-05 02:51:38.832432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.329 [2024-12-05 02:51:39.021541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.587  [2024-12-05T02:51:40.366Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:09.522 00:07:09.522 02:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:09.522 02:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:09.522 02:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:09.522 02:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:09.522 02:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:09.522 02:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:09.522 02:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.089 02:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:10.089 02:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:10.089 02:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:10.089 02:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.089 { 00:07:10.089 "subsystems": [ 00:07:10.089 { 00:07:10.089 "subsystem": "bdev", 00:07:10.089 "config": [ 00:07:10.089 { 00:07:10.089 "params": { 00:07:10.089 "trtype": "pcie", 00:07:10.089 "traddr": "0000:00:10.0", 00:07:10.089 "name": "Nvme0" 00:07:10.089 }, 00:07:10.089 "method": "bdev_nvme_attach_controller" 00:07:10.089 }, 00:07:10.089 { 00:07:10.089 "method": "bdev_wait_for_examine" 00:07:10.089 } 00:07:10.089 ] 00:07:10.089 } 00:07:10.089 ] 00:07:10.089 } 00:07:10.348 [2024-12-05 02:51:40.972275] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:10.348 [2024-12-05 02:51:40.972447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61169 ] 00:07:10.348 [2024-12-05 02:51:41.153370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.607 [2024-12-05 02:51:41.254718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.607 [2024-12-05 02:51:41.443075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.866  [2024-12-05T02:51:42.645Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:11.801 00:07:11.801 02:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:11.801 02:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:11.801 02:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:11.801 02:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.801 { 00:07:11.801 "subsystems": [ 00:07:11.801 { 00:07:11.801 "subsystem": "bdev", 00:07:11.801 "config": [ 00:07:11.801 { 00:07:11.801 "params": { 00:07:11.801 "trtype": "pcie", 00:07:11.801 "traddr": "0000:00:10.0", 00:07:11.801 "name": "Nvme0" 00:07:11.801 }, 00:07:11.801 "method": "bdev_nvme_attach_controller" 00:07:11.801 }, 00:07:11.801 { 00:07:11.801 "method": "bdev_wait_for_examine" 00:07:11.801 } 00:07:11.801 ] 00:07:11.801 } 00:07:11.801 ] 00:07:11.801 } 00:07:11.801 [2024-12-05 02:51:42.573276] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:11.801 [2024-12-05 02:51:42.573436] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61200 ] 00:07:12.061 [2024-12-05 02:51:42.749443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.061 [2024-12-05 02:51:42.854212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.319 [2024-12-05 02:51:43.036176] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.578  [2024-12-05T02:51:44.356Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:13.512 00:07:13.512 02:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:13.512 02:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:13.512 02:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:13.512 02:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:13.512 02:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:13.512 02:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:13.512 02:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:13.512 02:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:13.512 02:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:13.512 02:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:13.512 02:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.512 { 00:07:13.512 "subsystems": [ 00:07:13.512 { 00:07:13.512 "subsystem": "bdev", 00:07:13.512 "config": [ 00:07:13.512 { 00:07:13.512 "params": { 00:07:13.512 "trtype": "pcie", 00:07:13.512 "traddr": "0000:00:10.0", 00:07:13.512 "name": "Nvme0" 00:07:13.512 }, 00:07:13.512 "method": "bdev_nvme_attach_controller" 00:07:13.512 }, 00:07:13.512 { 00:07:13.512 "method": "bdev_wait_for_examine" 00:07:13.512 } 00:07:13.512 ] 00:07:13.512 } 00:07:13.512 ] 00:07:13.512 } 00:07:13.770 [2024-12-05 02:51:44.398602] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:13.770 [2024-12-05 02:51:44.398845] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61228 ] 00:07:13.770 [2024-12-05 02:51:44.588873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.027 [2024-12-05 02:51:44.694704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.285 [2024-12-05 02:51:44.877612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.285  [2024-12-05T02:51:46.063Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:15.219 00:07:15.219 02:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:15.219 02:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:15.219 02:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:15.219 02:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:15.219 02:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:15.219 02:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:15.219 02:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:15.219 02:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.785 02:51:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:15.785 02:51:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:15.785 02:51:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:15.785 02:51:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.785 { 00:07:15.785 "subsystems": [ 00:07:15.785 { 00:07:15.785 "subsystem": "bdev", 00:07:15.785 "config": [ 00:07:15.785 { 00:07:15.785 "params": { 00:07:15.785 "trtype": "pcie", 00:07:15.785 "traddr": "0000:00:10.0", 00:07:15.785 "name": "Nvme0" 00:07:15.785 }, 00:07:15.785 "method": "bdev_nvme_attach_controller" 00:07:15.785 }, 00:07:15.785 { 00:07:15.785 "method": "bdev_wait_for_examine" 00:07:15.785 } 00:07:15.785 ] 00:07:15.785 } 00:07:15.785 ] 00:07:15.785 } 00:07:15.785 [2024-12-05 02:51:46.491683] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
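For reference, the sweep driving these passes is small: bss is native_bs shifted left by 0, 1 and 2 (4096, 8192 and 16384 bytes) and qds is (1 64), i.e. six write/read/verify cycles; the per-pass counts seen in the log (15, 7, 3) keep each transfer near 60 KiB (61440, 57344 and 49152 bytes). A sketch of the outer loop, assuming count is floor(61440 / bs), which reproduces the logged values but may not be exactly how basic_rw.sh computes it:

    for bs in 4096 8192 16384; do        # native_bs << {0,1,2}
      for qd in 1 64; do
        count=$(( 61440 / bs ))          # 15, 7, 3 as seen in the log
        one_cycle "$bs" "$qd" "$count"   # hypothetical helper sketched earlier
      done
    done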
00:07:15.785 [2024-12-05 02:51:46.492063] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61259 ] 00:07:16.043 [2024-12-05 02:51:46.672456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.043 [2024-12-05 02:51:46.768827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.304 [2024-12-05 02:51:46.925390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.304  [2024-12-05T02:51:48.083Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:17.239 00:07:17.239 02:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:17.239 02:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:17.239 02:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:17.239 02:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:17.239 { 00:07:17.239 "subsystems": [ 00:07:17.239 { 00:07:17.239 "subsystem": "bdev", 00:07:17.239 "config": [ 00:07:17.239 { 00:07:17.239 "params": { 00:07:17.239 "trtype": "pcie", 00:07:17.239 "traddr": "0000:00:10.0", 00:07:17.239 "name": "Nvme0" 00:07:17.239 }, 00:07:17.239 "method": "bdev_nvme_attach_controller" 00:07:17.239 }, 00:07:17.239 { 00:07:17.239 "method": "bdev_wait_for_examine" 00:07:17.239 } 00:07:17.239 ] 00:07:17.239 } 00:07:17.239 ] 00:07:17.239 } 00:07:17.508 [2024-12-05 02:51:48.082389] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:17.508 [2024-12-05 02:51:48.082603] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61284 ] 00:07:17.508 [2024-12-05 02:51:48.259997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.779 [2024-12-05 02:51:48.343544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.779 [2024-12-05 02:51:48.493894] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.037  [2024-12-05T02:51:49.448Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:18.604 00:07:18.604 02:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.604 02:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:18.604 02:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:18.604 02:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:18.604 02:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:18.604 02:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:18.604 02:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:18.604 02:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:18.604 02:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:18.604 02:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:18.604 02:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:18.604 { 00:07:18.604 "subsystems": [ 00:07:18.604 { 00:07:18.604 "subsystem": "bdev", 00:07:18.604 "config": [ 00:07:18.604 { 00:07:18.604 "params": { 00:07:18.604 "trtype": "pcie", 00:07:18.604 "traddr": "0000:00:10.0", 00:07:18.604 "name": "Nvme0" 00:07:18.605 }, 00:07:18.605 "method": "bdev_nvme_attach_controller" 00:07:18.605 }, 00:07:18.605 { 00:07:18.605 "method": "bdev_wait_for_examine" 00:07:18.605 } 00:07:18.605 ] 00:07:18.605 } 00:07:18.605 ] 00:07:18.605 } 00:07:18.863 [2024-12-05 02:51:49.447210] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:18.863 [2024-12-05 02:51:49.447661] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61312 ] 00:07:18.863 [2024-12-05 02:51:49.618492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.122 [2024-12-05 02:51:49.707389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.122 [2024-12-05 02:51:49.859416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.381  [2024-12-05T02:51:51.162Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:20.318 00:07:20.318 02:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:20.318 02:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:20.318 02:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:20.318 02:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:20.318 02:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:20.318 02:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:20.318 02:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:20.578 02:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:20.578 02:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:20.578 02:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:20.578 02:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:20.578 { 00:07:20.578 "subsystems": [ 00:07:20.578 { 00:07:20.578 "subsystem": "bdev", 00:07:20.578 "config": [ 00:07:20.578 { 00:07:20.578 "params": { 00:07:20.578 "trtype": "pcie", 00:07:20.578 "traddr": "0000:00:10.0", 00:07:20.578 "name": "Nvme0" 00:07:20.578 }, 00:07:20.578 "method": "bdev_nvme_attach_controller" 00:07:20.578 }, 00:07:20.578 { 00:07:20.578 "method": "bdev_wait_for_examine" 00:07:20.578 } 00:07:20.578 ] 00:07:20.578 } 00:07:20.578 ] 00:07:20.578 } 00:07:20.837 [2024-12-05 02:51:51.442514] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:20.837 [2024-12-05 02:51:51.443045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61343 ] 00:07:20.837 [2024-12-05 02:51:51.621110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.097 [2024-12-05 02:51:51.704294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.097 [2024-12-05 02:51:51.858179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.356  [2024-12-05T02:51:52.767Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:21.923 00:07:21.923 02:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:21.923 02:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:21.923 02:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:21.923 02:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:21.923 { 00:07:21.923 "subsystems": [ 00:07:21.923 { 00:07:21.923 "subsystem": "bdev", 00:07:21.923 "config": [ 00:07:21.923 { 00:07:21.923 "params": { 00:07:21.923 "trtype": "pcie", 00:07:21.923 "traddr": "0000:00:10.0", 00:07:21.923 "name": "Nvme0" 00:07:21.923 }, 00:07:21.923 "method": "bdev_nvme_attach_controller" 00:07:21.923 }, 00:07:21.923 { 00:07:21.923 "method": "bdev_wait_for_examine" 00:07:21.923 } 00:07:21.923 ] 00:07:21.923 } 00:07:21.923 ] 00:07:21.923 } 00:07:22.182 [2024-12-05 02:51:52.811287] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:22.182 [2024-12-05 02:51:52.811453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61363 ] 00:07:22.182 [2024-12-05 02:51:52.997568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.441 [2024-12-05 02:51:53.128818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.701 [2024-12-05 02:51:53.300395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.701  [2024-12-05T02:51:54.483Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:23.639 00:07:23.639 02:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:23.639 02:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:23.639 02:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:23.639 02:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:23.639 02:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:23.639 02:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:23.639 02:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:23.639 02:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:23.639 02:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:23.639 02:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:23.639 02:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:23.639 { 00:07:23.639 "subsystems": [ 00:07:23.639 { 00:07:23.639 "subsystem": "bdev", 00:07:23.639 "config": [ 00:07:23.639 { 00:07:23.639 "params": { 00:07:23.639 "trtype": "pcie", 00:07:23.639 "traddr": "0000:00:10.0", 00:07:23.640 "name": "Nvme0" 00:07:23.640 }, 00:07:23.640 "method": "bdev_nvme_attach_controller" 00:07:23.640 }, 00:07:23.640 { 00:07:23.640 "method": "bdev_wait_for_examine" 00:07:23.640 } 00:07:23.640 ] 00:07:23.640 } 00:07:23.640 ] 00:07:23.640 } 00:07:23.640 [2024-12-05 02:51:54.469958] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
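Between passes the clear_nvme helper from dd/common.sh scrubs what was just written by streaming zeroes back onto the bdev; with size=49152 and a 1 MiB block size a single write covers it. A hedged reconstruction of the traced core (how count is derived from size is not visible in this log), reusing the $SPDK_DD/$conf shorthand from the sketches above:

    # Sketch of clear_nvme as traced above; the real helper also takes an optional
    # nvme_ref argument whose use is not exercised (it is empty) in this trace.
    clear_nvme() {
        local bdev=$1 nvme_ref=$2 size=$3
        local bs=1048576
        local count=1    # traced value for size=49152; the derivation from size is an assumption
        "$SPDK_DD" --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" --json "$conf"
    }
    clear_nvme Nvme0n1 '' 49152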
00:07:23.640 [2024-12-05 02:51:54.470113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61396 ] 00:07:23.899 [2024-12-05 02:51:54.637401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.899 [2024-12-05 02:51:54.726277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.158 [2024-12-05 02:51:54.892675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.415  [2024-12-05T02:51:55.825Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:24.981 00:07:24.981 ************************************ 00:07:24.981 END TEST dd_rw 00:07:24.981 ************************************ 00:07:24.981 00:07:24.981 real 0m31.619s 00:07:24.981 user 0m26.710s 00:07:24.981 sys 0m14.717s 00:07:24.981 02:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.981 02:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:25.239 02:51:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:25.239 02:51:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.239 02:51:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.239 02:51:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:25.239 ************************************ 00:07:25.239 START TEST dd_rw_offset 00:07:25.239 ************************************ 00:07:25.239 02:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:07:25.239 02:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:25.239 02:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:25.239 02:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:25.239 02:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:25.239 02:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:25.240 02:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=06kfgz7tdec0r1wu2i2k0gvrsu7zyvhhbj2s7ygxg9aroldf23k0rznun0d262dn6i90xdvina4k28qg9e8ety04bcfqi6gxluo2gqqhizxxkvck2uvdyv0lg656p2sxr551y8o6gc30w483hkkmq418vhsio2xtnjb50r3bp1zzuak2e7olwuoj9jyxupzff13nsl4rigvf8c10ql1c3v2kj4b7mwn1fkaxvrbu1zm4q8i6r4uldoacumpho0ogvwdw9zfbap83iyt9bjj88eqmw9o3ncnjoti6sc0c1r0j3h3v2gx2s2rmkykmv5d14z9xucut2nm0yvb0tyxbtws7ujdvl0t8whdi05qbhxquhzng7emawbgl3g9ssdgmj1ba380qspvphrp9b7c17fwqi0ph2ibfgveflre1yeml9e86bv2lfvezs8mx9qf3s9exgi63ra7d7xyunh1xkmnnqycx4bpwm7968bcta3jaen9pg9kmhedltwxz66khohutlebswewitdjdog339o90ar3wpoy5w2p0fwucxsmjvamybzlge8v7vu0z4hlfsdjfe9bpmwrpoy6op3ptf3p90fwe53n5wdwianyq958303q8gas0fcmuyeqv1ejkdbyx5ze8twhm72l6y3rtlos1en9zzz708lyqhf8sbeba6xm70f6gdwy89cabtja5kfa9p13xjfpyls9ve2oohp8bql2hkv03kaaopxzkrhkuc4utpafeieyi22fj9bqjuus2l0h3urkzkqdieahrn12lj2d6fo9ftm0wh706p89tra7pl2cajcpsvd4d623u95cyy0lv0vo4zd68z096cx8b56zvkqkbifecw7gqkuuqnsibllcjl6jrd4lvd3fi1vlomhai0jdxvyfpo08108jn6vflbmix43bnfqtsmr5vtaw79hn3021wdx5yqxwjrcfg73qpv083fm5o1nejkxqqm6e7u49bj7iuycisyx69o37u0u3co6x1zhawkautkkdmsjtt4n2d4htpd9fov3ejr8f2fsgxcbxwuu76jusdgs7vy5i2pyerdae44yw84co3rfqlx30zskmelzfs0gfaplnj7orv9stqymcnszgc33tf6apusbrooiopa102pvwxqgsx3zy4wtm2doqgue0vzl7oyt6cphyw7ty0145xpiip1w3v7ys97f961ssytglhzf7qlt59ltoign8lsopgkdvhb6v4oas98p5y5fzxzxkwph3gvpp4z31xrfrbr8ce86sr3uict3dfhujk8aibec9uzjr2fatbzbv3esld7iktgfbmo84fa3mr4klh3ol8u8sca2zjselhfp58i4g4rs1de82lypdce7f462aioqrazlqzhzlhzogtfiltfiqckol5r9yyikcggo6nmzc3u4o9skj09t80by0j9xbcymzo2b4hyz9z8e2ha75besm6sfywor8s2r785cjwa2r1i75uug2exxyz17io8wxzbakskeodgvpf6tde09r81u3osdoe194n974ecs2ie719etpmctdmm5ptxfk2wqq16r19r7l3uwpiee7tuie1gfem1ntvafwsvy0xqnt2fdh7xxwf2cquj9tj9c9i60vr9jgxp6p4zgjaamob55qgscf01tzv8mjfyqfyk15ir2e4m14q7fp590xtjseuc0zozwhe9sanc9miagdfpxxapy3ulht9b01pyaqf74kno9rtfh7s2g6yifmkrnqmoanbmdfys9padtuyses8ct2e9xlal1ec12gaaig13b59r05hu06lpyzhtstkull5wax2qzwzzuvx89bgalq6rq0qg4z57km0ka09r0jmdosmzo47cxp2dwew0ix7tdp7g1amfpxt7alyyixkrntm3fd27gel0rsbvcd6rkfcy5qtl6mo73pyscv27vwml0r6m376jn8ldouoqwzyo9r310sy5gr32r7ufp4yfyg5du3kdo08379d8f24tsrmsivo78m881a3zbqwbypye3dymn02h42804vcl736dlpdfacu99f8rwlt0gpl1mdygtvrplpvfmxr8tyhmpdo3nzahkh97iy8qi54jvkeogexpv68wwzvshj7or6vsmxv5wjg4q2he0z4qdxin37f1zigvbmr97ff0big5iyzf13efqka0f3o8fhului0zw38ozposghuxn9e58g4o0ag8jokw2mca8xnmfjt1cmoqyi3cv6w0ot95n54hcpbx6hsvxoqrf1a15cfcp9kkbbt0e622agdrvuy6xaf3n89adthdz9fsugcusw1hn2brhnhg2020pdvqi84g86np7tq8jh7poy45py4535el1m0bls8mu5afu1r8pkd8xqn81tb9qfubkpxw30wwrni2ipgwhiwlxq928dhygjflcgu5v04hgjhrpw06qbklhwsd5plyqs5p7n5ouqii0ua69jilzwc4z8yadrvupfzk2xm2hj8u940gzppe18otnti47ep624gd56vk9tv7i2vqtgmd4oqjvc788ijahng27j19iye598hfyw2shomrxy7ctm7v351eajq9vth33btnhsxvs3rtkiqqx6hmvx07mg0zurbdgngpxcubp3g1ifio987zlevixqrqny94f6vu65lujlmvsz35bx5zivnyijre5e5rdfcidlust9cbu11rgkwgr5d3gem3nc9kuj6x7gx4gnnpg6udilig9g53tqnfr17567oq0y9otgk3llv4ysem9o1udmif8bd8qdjaec1hrz51m8qi0m6xek4itupr7vqcznx8sfk54f8t8oeiup2awattenwtsniis41zclqeqxhgo4335qje4pf9zdwbaz224gwndmrkcuzqjtd2d0ls4e1x91iz5521fkz0afqkpw0gotiz5fpt7y65vsa0x0v0w9cfp4sg8qbhump6jw1voetm90iljq0q8bmyzj506qmqbfhdb3mk133dgf0fdjlqxidl05bavqzo2795t5fxvp8y6ffj43rtqtqwm67js9eqn1qppzefe9zc41537wxk2xgidcq8szc7vadngg3kutz1s8whnlxucataa88udws9je3x8oky1fy4fllhji6916mm88trcloaoem5jv6p0x9nq9rfk66h6nnzs9myzliv80oo02o6k1o0wsj9dvidvm7qk55r8xfxu2owezw5xw2w0t936lkadi1emzclyz7dmdnbp8d4beima0rqag3b09b69gpwj8nqa1bppq8ssskywodm48ycrs8rxvb3psqkczt1y47ze0eyyposf3frpcnq5i8wcu4dps6lpfooxlhrdq8yazck98iei4n0staowzn1ea19g00vj165ojy3l7cfhr3g0kvmjh64hjk10apjkkfcvo4xopwpo9kb894wrnvozqg85eblpm6dumy6tt63jesczmor2c8w4of8dlvftxtl2urlu06obx4uo0g5rtijmy7zvk75zclbejw5qxzfgwn7bt8hm
8dcvr4nocvhgr9qpkom3ysvtxyrme7w3qohqvzbpfq23xs0v1egvjk54pr0y8kk1hra12lg2xnthunsba4rmduqgdr147cn86ohg8e44d85nm01zfutbhsqw1b7d0l9khobgpyhk9ypkjvbj6xsyv53ony4splsijxchgrmuyac7g0087y2w4kj32vm5a3ld38dlez9mrdi9lazuo4hnvyqpvubqcatyncwrq61nsxofo1i46foio6n2wfj8f6mwevckg79h9hkgpiypfs85dxasoe5pm0y7de4xmcaxuj43z11a8nf84me52ps7pvzlgtbg24zemctaxeevnagqu4p0aqtrgj48l0knax2frxm2fyhr54ut8qe3n5inspg3pgwhoi5l6vkk04jq427mkdr3m51euvo1f77pweaa692dqb7gp2mdq1b1qs22bupr96todqa61fjeddbco3303m2f5e4pd8nq7p0xs34pa354ahfhzlm7ukf0ei0faa2u774axufolwmyzxl44jwo9m27jdaq3dlfbo 00:07:25.240 02:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:25.240 02:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:25.240 02:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:25.240 02:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:25.240 { 00:07:25.240 "subsystems": [ 00:07:25.240 { 00:07:25.240 "subsystem": "bdev", 00:07:25.240 "config": [ 00:07:25.240 { 00:07:25.240 "params": { 00:07:25.240 "trtype": "pcie", 00:07:25.240 "traddr": "0000:00:10.0", 00:07:25.240 "name": "Nvme0" 00:07:25.240 }, 00:07:25.240 "method": "bdev_nvme_attach_controller" 00:07:25.240 }, 00:07:25.240 { 00:07:25.240 "method": "bdev_wait_for_examine" 00:07:25.240 } 00:07:25.240 ] 00:07:25.240 } 00:07:25.240 ] 00:07:25.240 } 00:07:25.240 [2024-12-05 02:51:56.007962] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:25.240 [2024-12-05 02:51:56.008117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61433 ] 00:07:25.498 [2024-12-05 02:51:56.169534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.498 [2024-12-05 02:51:56.252111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.757 [2024-12-05 02:51:56.409745] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.758  [2024-12-05T02:51:57.541Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:26.697 00:07:26.697 02:51:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:26.697 02:51:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:26.697 02:51:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:26.697 02:51:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:26.697 { 00:07:26.697 "subsystems": [ 00:07:26.697 { 00:07:26.697 "subsystem": "bdev", 00:07:26.697 "config": [ 00:07:26.697 { 00:07:26.697 "params": { 00:07:26.697 "trtype": "pcie", 00:07:26.697 "traddr": "0000:00:10.0", 00:07:26.697 "name": "Nvme0" 00:07:26.697 }, 00:07:26.697 "method": "bdev_nvme_attach_controller" 00:07:26.697 }, 00:07:26.697 { 00:07:26.697 "method": "bdev_wait_for_examine" 00:07:26.697 } 00:07:26.697 ] 00:07:26.697 } 00:07:26.697 ] 00:07:26.697 } 00:07:26.956 [2024-12-05 02:51:57.540395] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
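The dd_rw_offset case takes the 4096-byte string generated above, writes it one block into the bdev, and reads it back from the same offset: --seek is an output-side block offset and --skip the input-side one, in the spirit of GNU dd. Condensed, and reusing the shorthand from the earlier sketches, the pair of transfers and the check look like:

    # Offset round trip as traced here; seek/skip are block counts, and count=1
    # limits the read-back to the single block that was written.
    data=$(< "$test_dir/dd.dump0")     # stands in for the generated 4096-byte payload
    "$SPDK_DD" --if="$test_dir/dd.dump0" --ob=Nvme0n1 --seek=1 --json "$conf"
    "$SPDK_DD" --ib=Nvme0n1 --of="$test_dir/dd.dump1" --skip=1 --count=1 --json "$conf"
    read -rn4096 data_check < "$test_dir/dd.dump1"
    [[ $data_check == "$data" ]] && echo "offset read returned the original bytes"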
00:07:26.956 [2024-12-05 02:51:57.540574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61459 ] 00:07:26.956 [2024-12-05 02:51:57.718938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.215 [2024-12-05 02:51:57.800286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.215 [2024-12-05 02:51:57.954817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.474  [2024-12-05T02:51:58.887Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:28.043 00:07:28.303 02:51:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:28.303 ************************************ 00:07:28.303 END TEST dd_rw_offset 00:07:28.303 ************************************ 00:07:28.304 02:51:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 06kfgz7tdec0r1wu2i2k0gvrsu7zyvhhbj2s7ygxg9aroldf23k0rznun0d262dn6i90xdvina4k28qg9e8ety04bcfqi6gxluo2gqqhizxxkvck2uvdyv0lg656p2sxr551y8o6gc30w483hkkmq418vhsio2xtnjb50r3bp1zzuak2e7olwuoj9jyxupzff13nsl4rigvf8c10ql1c3v2kj4b7mwn1fkaxvrbu1zm4q8i6r4uldoacumpho0ogvwdw9zfbap83iyt9bjj88eqmw9o3ncnjoti6sc0c1r0j3h3v2gx2s2rmkykmv5d14z9xucut2nm0yvb0tyxbtws7ujdvl0t8whdi05qbhxquhzng7emawbgl3g9ssdgmj1ba380qspvphrp9b7c17fwqi0ph2ibfgveflre1yeml9e86bv2lfvezs8mx9qf3s9exgi63ra7d7xyunh1xkmnnqycx4bpwm7968bcta3jaen9pg9kmhedltwxz66khohutlebswewitdjdog339o90ar3wpoy5w2p0fwucxsmjvamybzlge8v7vu0z4hlfsdjfe9bpmwrpoy6op3ptf3p90fwe53n5wdwianyq958303q8gas0fcmuyeqv1ejkdbyx5ze8twhm72l6y3rtlos1en9zzz708lyqhf8sbeba6xm70f6gdwy89cabtja5kfa9p13xjfpyls9ve2oohp8bql2hkv03kaaopxzkrhkuc4utpafeieyi22fj9bqjuus2l0h3urkzkqdieahrn12lj2d6fo9ftm0wh706p89tra7pl2cajcpsvd4d623u95cyy0lv0vo4zd68z096cx8b56zvkqkbifecw7gqkuuqnsibllcjl6jrd4lvd3fi1vlomhai0jdxvyfpo08108jn6vflbmix43bnfqtsmr5vtaw79hn3021wdx5yqxwjrcfg73qpv083fm5o1nejkxqqm6e7u49bj7iuycisyx69o37u0u3co6x1zhawkautkkdmsjtt4n2d4htpd9fov3ejr8f2fsgxcbxwuu76jusdgs7vy5i2pyerdae44yw84co3rfqlx30zskmelzfs0gfaplnj7orv9stqymcnszgc33tf6apusbrooiopa102pvwxqgsx3zy4wtm2doqgue0vzl7oyt6cphyw7ty0145xpiip1w3v7ys97f961ssytglhzf7qlt59ltoign8lsopgkdvhb6v4oas98p5y5fzxzxkwph3gvpp4z31xrfrbr8ce86sr3uict3dfhujk8aibec9uzjr2fatbzbv3esld7iktgfbmo84fa3mr4klh3ol8u8sca2zjselhfp58i4g4rs1de82lypdce7f462aioqrazlqzhzlhzogtfiltfiqckol5r9yyikcggo6nmzc3u4o9skj09t80by0j9xbcymzo2b4hyz9z8e2ha75besm6sfywor8s2r785cjwa2r1i75uug2exxyz17io8wxzbakskeodgvpf6tde09r81u3osdoe194n974ecs2ie719etpmctdmm5ptxfk2wqq16r19r7l3uwpiee7tuie1gfem1ntvafwsvy0xqnt2fdh7xxwf2cquj9tj9c9i60vr9jgxp6p4zgjaamob55qgscf01tzv8mjfyqfyk15ir2e4m14q7fp590xtjseuc0zozwhe9sanc9miagdfpxxapy3ulht9b01pyaqf74kno9rtfh7s2g6yifmkrnqmoanbmdfys9padtuyses8ct2e9xlal1ec12gaaig13b59r05hu06lpyzhtstkull5wax2qzwzzuvx89bgalq6rq0qg4z57km0ka09r0jmdosmzo47cxp2dwew0ix7tdp7g1amfpxt7alyyixkrntm3fd27gel0rsbvcd6rkfcy5qtl6mo73pyscv27vwml0r6m376jn8ldouoqwzyo9r310sy5gr32r7ufp4yfyg5du3kdo08379d8f24tsrmsivo78m881a3zbqwbypye3dymn02h42804vcl736dlpdfacu99f8rwlt0gpl1mdygtvrplpvfmxr8tyhmpdo3nzahkh97iy8qi54jvkeogexpv68wwzvshj7or6vsmxv5wjg4q2he0z4qdxin37f1zigvbmr97ff0big5iyzf13efqka0f3o8fhului0zw38ozposghuxn9e58g4o0ag8jokw2mca8xnmfjt1cmoqyi3cv6w0ot95n54hcpbx6hsvxoqrf1a15cfcp9kkbbt0e622agdrvuy6xaf3n89adthdz9fsugcusw1hn2brhnhg2020pdvqi84g86np7tq8jh7poy45py4535el1m0bls8mu5afu1r8pkd8xqn81tb9qfubkpxw30wwrni2ipgwhiwlxq928dhygjflcgu5v04hgjhrpw06qbklhwsd5plyqs5p
7n5ouqii0ua69jilzwc4z8yadrvupfzk2xm2hj8u940gzppe18otnti47ep624gd56vk9tv7i2vqtgmd4oqjvc788ijahng27j19iye598hfyw2shomrxy7ctm7v351eajq9vth33btnhsxvs3rtkiqqx6hmvx07mg0zurbdgngpxcubp3g1ifio987zlevixqrqny94f6vu65lujlmvsz35bx5zivnyijre5e5rdfcidlust9cbu11rgkwgr5d3gem3nc9kuj6x7gx4gnnpg6udilig9g53tqnfr17567oq0y9otgk3llv4ysem9o1udmif8bd8qdjaec1hrz51m8qi0m6xek4itupr7vqcznx8sfk54f8t8oeiup2awattenwtsniis41zclqeqxhgo4335qje4pf9zdwbaz224gwndmrkcuzqjtd2d0ls4e1x91iz5521fkz0afqkpw0gotiz5fpt7y65vsa0x0v0w9cfp4sg8qbhump6jw1voetm90iljq0q8bmyzj506qmqbfhdb3mk133dgf0fdjlqxidl05bavqzo2795t5fxvp8y6ffj43rtqtqwm67js9eqn1qppzefe9zc41537wxk2xgidcq8szc7vadngg3kutz1s8whnlxucataa88udws9je3x8oky1fy4fllhji6916mm88trcloaoem5jv6p0x9nq9rfk66h6nnzs9myzliv80oo02o6k1o0wsj9dvidvm7qk55r8xfxu2owezw5xw2w0t936lkadi1emzclyz7dmdnbp8d4beima0rqag3b09b69gpwj8nqa1bppq8ssskywodm48ycrs8rxvb3psqkczt1y47ze0eyyposf3frpcnq5i8wcu4dps6lpfooxlhrdq8yazck98iei4n0staowzn1ea19g00vj165ojy3l7cfhr3g0kvmjh64hjk10apjkkfcvo4xopwpo9kb894wrnvozqg85eblpm6dumy6tt63jesczmor2c8w4of8dlvftxtl2urlu06obx4uo0g5rtijmy7zvk75zclbejw5qxzfgwn7bt8hm8dcvr4nocvhgr9qpkom3ysvtxyrme7w3qohqvzbpfq23xs0v1egvjk54pr0y8kk1hra12lg2xnthunsba4rmduqgdr147cn86ohg8e44d85nm01zfutbhsqw1b7d0l9khobgpyhk9ypkjvbj6xsyv53ony4splsijxchgrmuyac7g0087y2w4kj32vm5a3ld38dlez9mrdi9lazuo4hnvyqpvubqcatyncwrq61nsxofo1i46foio6n2wfj8f6mwevckg79h9hkgpiypfs85dxasoe5pm0y7de4xmcaxuj43z11a8nf84me52ps7pvzlgtbg24zemctaxeevnagqu4p0aqtrgj48l0knax2frxm2fyhr54ut8qe3n5inspg3pgwhoi5l6vkk04jq427mkdr3m51euvo1f77pweaa692dqb7gp2mdq1b1qs22bupr96todqa61fjeddbco3303m2f5e4pd8nq7p0xs34pa354ahfhzlm7ukf0ei0faa2u774axufolwmyzxl44jwo9m27jdaq3dlfbo == \0\6\k\f\g\z\7\t\d\e\c\0\r\1\w\u\2\i\2\k\0\g\v\r\s\u\7\z\y\v\h\h\b\j\2\s\7\y\g\x\g\9\a\r\o\l\d\f\2\3\k\0\r\z\n\u\n\0\d\2\6\2\d\n\6\i\9\0\x\d\v\i\n\a\4\k\2\8\q\g\9\e\8\e\t\y\0\4\b\c\f\q\i\6\g\x\l\u\o\2\g\q\q\h\i\z\x\x\k\v\c\k\2\u\v\d\y\v\0\l\g\6\5\6\p\2\s\x\r\5\5\1\y\8\o\6\g\c\3\0\w\4\8\3\h\k\k\m\q\4\1\8\v\h\s\i\o\2\x\t\n\j\b\5\0\r\3\b\p\1\z\z\u\a\k\2\e\7\o\l\w\u\o\j\9\j\y\x\u\p\z\f\f\1\3\n\s\l\4\r\i\g\v\f\8\c\1\0\q\l\1\c\3\v\2\k\j\4\b\7\m\w\n\1\f\k\a\x\v\r\b\u\1\z\m\4\q\8\i\6\r\4\u\l\d\o\a\c\u\m\p\h\o\0\o\g\v\w\d\w\9\z\f\b\a\p\8\3\i\y\t\9\b\j\j\8\8\e\q\m\w\9\o\3\n\c\n\j\o\t\i\6\s\c\0\c\1\r\0\j\3\h\3\v\2\g\x\2\s\2\r\m\k\y\k\m\v\5\d\1\4\z\9\x\u\c\u\t\2\n\m\0\y\v\b\0\t\y\x\b\t\w\s\7\u\j\d\v\l\0\t\8\w\h\d\i\0\5\q\b\h\x\q\u\h\z\n\g\7\e\m\a\w\b\g\l\3\g\9\s\s\d\g\m\j\1\b\a\3\8\0\q\s\p\v\p\h\r\p\9\b\7\c\1\7\f\w\q\i\0\p\h\2\i\b\f\g\v\e\f\l\r\e\1\y\e\m\l\9\e\8\6\b\v\2\l\f\v\e\z\s\8\m\x\9\q\f\3\s\9\e\x\g\i\6\3\r\a\7\d\7\x\y\u\n\h\1\x\k\m\n\n\q\y\c\x\4\b\p\w\m\7\9\6\8\b\c\t\a\3\j\a\e\n\9\p\g\9\k\m\h\e\d\l\t\w\x\z\6\6\k\h\o\h\u\t\l\e\b\s\w\e\w\i\t\d\j\d\o\g\3\3\9\o\9\0\a\r\3\w\p\o\y\5\w\2\p\0\f\w\u\c\x\s\m\j\v\a\m\y\b\z\l\g\e\8\v\7\v\u\0\z\4\h\l\f\s\d\j\f\e\9\b\p\m\w\r\p\o\y\6\o\p\3\p\t\f\3\p\9\0\f\w\e\5\3\n\5\w\d\w\i\a\n\y\q\9\5\8\3\0\3\q\8\g\a\s\0\f\c\m\u\y\e\q\v\1\e\j\k\d\b\y\x\5\z\e\8\t\w\h\m\7\2\l\6\y\3\r\t\l\o\s\1\e\n\9\z\z\z\7\0\8\l\y\q\h\f\8\s\b\e\b\a\6\x\m\7\0\f\6\g\d\w\y\8\9\c\a\b\t\j\a\5\k\f\a\9\p\1\3\x\j\f\p\y\l\s\9\v\e\2\o\o\h\p\8\b\q\l\2\h\k\v\0\3\k\a\a\o\p\x\z\k\r\h\k\u\c\4\u\t\p\a\f\e\i\e\y\i\2\2\f\j\9\b\q\j\u\u\s\2\l\0\h\3\u\r\k\z\k\q\d\i\e\a\h\r\n\1\2\l\j\2\d\6\f\o\9\f\t\m\0\w\h\7\0\6\p\8\9\t\r\a\7\p\l\2\c\a\j\c\p\s\v\d\4\d\6\2\3\u\9\5\c\y\y\0\l\v\0\v\o\4\z\d\6\8\z\0\9\6\c\x\8\b\5\6\z\v\k\q\k\b\i\f\e\c\w\7\g\q\k\u\u\q\n\s\i\b\l\l\c\j\l\6\j\r\d\4\l\v\d\3\f\i\1\v\l\o\m\h\a\i\0\j\d\x\v\y\f\p\o\0\8\1\0\8\j\n\6\v\f\l\b\m\i\x\4\3\b\n\f\q\t\s\m\r\5\v\t\a\w\7\9\h\n\3\0\2\1\w\d\x\5\y\q\x\w\j\r\c\f\g
\7\3\q\p\v\0\8\3\f\m\5\o\1\n\e\j\k\x\q\q\m\6\e\7\u\4\9\b\j\7\i\u\y\c\i\s\y\x\6\9\o\3\7\u\0\u\3\c\o\6\x\1\z\h\a\w\k\a\u\t\k\k\d\m\s\j\t\t\4\n\2\d\4\h\t\p\d\9\f\o\v\3\e\j\r\8\f\2\f\s\g\x\c\b\x\w\u\u\7\6\j\u\s\d\g\s\7\v\y\5\i\2\p\y\e\r\d\a\e\4\4\y\w\8\4\c\o\3\r\f\q\l\x\3\0\z\s\k\m\e\l\z\f\s\0\g\f\a\p\l\n\j\7\o\r\v\9\s\t\q\y\m\c\n\s\z\g\c\3\3\t\f\6\a\p\u\s\b\r\o\o\i\o\p\a\1\0\2\p\v\w\x\q\g\s\x\3\z\y\4\w\t\m\2\d\o\q\g\u\e\0\v\z\l\7\o\y\t\6\c\p\h\y\w\7\t\y\0\1\4\5\x\p\i\i\p\1\w\3\v\7\y\s\9\7\f\9\6\1\s\s\y\t\g\l\h\z\f\7\q\l\t\5\9\l\t\o\i\g\n\8\l\s\o\p\g\k\d\v\h\b\6\v\4\o\a\s\9\8\p\5\y\5\f\z\x\z\x\k\w\p\h\3\g\v\p\p\4\z\3\1\x\r\f\r\b\r\8\c\e\8\6\s\r\3\u\i\c\t\3\d\f\h\u\j\k\8\a\i\b\e\c\9\u\z\j\r\2\f\a\t\b\z\b\v\3\e\s\l\d\7\i\k\t\g\f\b\m\o\8\4\f\a\3\m\r\4\k\l\h\3\o\l\8\u\8\s\c\a\2\z\j\s\e\l\h\f\p\5\8\i\4\g\4\r\s\1\d\e\8\2\l\y\p\d\c\e\7\f\4\6\2\a\i\o\q\r\a\z\l\q\z\h\z\l\h\z\o\g\t\f\i\l\t\f\i\q\c\k\o\l\5\r\9\y\y\i\k\c\g\g\o\6\n\m\z\c\3\u\4\o\9\s\k\j\0\9\t\8\0\b\y\0\j\9\x\b\c\y\m\z\o\2\b\4\h\y\z\9\z\8\e\2\h\a\7\5\b\e\s\m\6\s\f\y\w\o\r\8\s\2\r\7\8\5\c\j\w\a\2\r\1\i\7\5\u\u\g\2\e\x\x\y\z\1\7\i\o\8\w\x\z\b\a\k\s\k\e\o\d\g\v\p\f\6\t\d\e\0\9\r\8\1\u\3\o\s\d\o\e\1\9\4\n\9\7\4\e\c\s\2\i\e\7\1\9\e\t\p\m\c\t\d\m\m\5\p\t\x\f\k\2\w\q\q\1\6\r\1\9\r\7\l\3\u\w\p\i\e\e\7\t\u\i\e\1\g\f\e\m\1\n\t\v\a\f\w\s\v\y\0\x\q\n\t\2\f\d\h\7\x\x\w\f\2\c\q\u\j\9\t\j\9\c\9\i\6\0\v\r\9\j\g\x\p\6\p\4\z\g\j\a\a\m\o\b\5\5\q\g\s\c\f\0\1\t\z\v\8\m\j\f\y\q\f\y\k\1\5\i\r\2\e\4\m\1\4\q\7\f\p\5\9\0\x\t\j\s\e\u\c\0\z\o\z\w\h\e\9\s\a\n\c\9\m\i\a\g\d\f\p\x\x\a\p\y\3\u\l\h\t\9\b\0\1\p\y\a\q\f\7\4\k\n\o\9\r\t\f\h\7\s\2\g\6\y\i\f\m\k\r\n\q\m\o\a\n\b\m\d\f\y\s\9\p\a\d\t\u\y\s\e\s\8\c\t\2\e\9\x\l\a\l\1\e\c\1\2\g\a\a\i\g\1\3\b\5\9\r\0\5\h\u\0\6\l\p\y\z\h\t\s\t\k\u\l\l\5\w\a\x\2\q\z\w\z\z\u\v\x\8\9\b\g\a\l\q\6\r\q\0\q\g\4\z\5\7\k\m\0\k\a\0\9\r\0\j\m\d\o\s\m\z\o\4\7\c\x\p\2\d\w\e\w\0\i\x\7\t\d\p\7\g\1\a\m\f\p\x\t\7\a\l\y\y\i\x\k\r\n\t\m\3\f\d\2\7\g\e\l\0\r\s\b\v\c\d\6\r\k\f\c\y\5\q\t\l\6\m\o\7\3\p\y\s\c\v\2\7\v\w\m\l\0\r\6\m\3\7\6\j\n\8\l\d\o\u\o\q\w\z\y\o\9\r\3\1\0\s\y\5\g\r\3\2\r\7\u\f\p\4\y\f\y\g\5\d\u\3\k\d\o\0\8\3\7\9\d\8\f\2\4\t\s\r\m\s\i\v\o\7\8\m\8\8\1\a\3\z\b\q\w\b\y\p\y\e\3\d\y\m\n\0\2\h\4\2\8\0\4\v\c\l\7\3\6\d\l\p\d\f\a\c\u\9\9\f\8\r\w\l\t\0\g\p\l\1\m\d\y\g\t\v\r\p\l\p\v\f\m\x\r\8\t\y\h\m\p\d\o\3\n\z\a\h\k\h\9\7\i\y\8\q\i\5\4\j\v\k\e\o\g\e\x\p\v\6\8\w\w\z\v\s\h\j\7\o\r\6\v\s\m\x\v\5\w\j\g\4\q\2\h\e\0\z\4\q\d\x\i\n\3\7\f\1\z\i\g\v\b\m\r\9\7\f\f\0\b\i\g\5\i\y\z\f\1\3\e\f\q\k\a\0\f\3\o\8\f\h\u\l\u\i\0\z\w\3\8\o\z\p\o\s\g\h\u\x\n\9\e\5\8\g\4\o\0\a\g\8\j\o\k\w\2\m\c\a\8\x\n\m\f\j\t\1\c\m\o\q\y\i\3\c\v\6\w\0\o\t\9\5\n\5\4\h\c\p\b\x\6\h\s\v\x\o\q\r\f\1\a\1\5\c\f\c\p\9\k\k\b\b\t\0\e\6\2\2\a\g\d\r\v\u\y\6\x\a\f\3\n\8\9\a\d\t\h\d\z\9\f\s\u\g\c\u\s\w\1\h\n\2\b\r\h\n\h\g\2\0\2\0\p\d\v\q\i\8\4\g\8\6\n\p\7\t\q\8\j\h\7\p\o\y\4\5\p\y\4\5\3\5\e\l\1\m\0\b\l\s\8\m\u\5\a\f\u\1\r\8\p\k\d\8\x\q\n\8\1\t\b\9\q\f\u\b\k\p\x\w\3\0\w\w\r\n\i\2\i\p\g\w\h\i\w\l\x\q\9\2\8\d\h\y\g\j\f\l\c\g\u\5\v\0\4\h\g\j\h\r\p\w\0\6\q\b\k\l\h\w\s\d\5\p\l\y\q\s\5\p\7\n\5\o\u\q\i\i\0\u\a\6\9\j\i\l\z\w\c\4\z\8\y\a\d\r\v\u\p\f\z\k\2\x\m\2\h\j\8\u\9\4\0\g\z\p\p\e\1\8\o\t\n\t\i\4\7\e\p\6\2\4\g\d\5\6\v\k\9\t\v\7\i\2\v\q\t\g\m\d\4\o\q\j\v\c\7\8\8\i\j\a\h\n\g\2\7\j\1\9\i\y\e\5\9\8\h\f\y\w\2\s\h\o\m\r\x\y\7\c\t\m\7\v\3\5\1\e\a\j\q\9\v\t\h\3\3\b\t\n\h\s\x\v\s\3\r\t\k\i\q\q\x\6\h\m\v\x\0\7\m\g\0\z\u\r\b\d\g\n\g\p\x\c\u\b\p\3\g\1\i\f\i\o\9\8\7\z\l\e\v\i\x\q\r\q\n\y\9\4\f\6\v\u\6\5\l\u\j\l\m\v\s\z\3\5\b\x\5\z\i\v\n\y\i\j\r\e\5\e\5\r\d\f\c\i\d\l\u\s\t\9\c\b\u\1\1\r\g\k\w\g\r\5\d\3\g\e\m\3\n\c\9\k\u\j\6\x\
7\g\x\4\g\n\n\p\g\6\u\d\i\l\i\g\9\g\5\3\t\q\n\f\r\1\7\5\6\7\o\q\0\y\9\o\t\g\k\3\l\l\v\4\y\s\e\m\9\o\1\u\d\m\i\f\8\b\d\8\q\d\j\a\e\c\1\h\r\z\5\1\m\8\q\i\0\m\6\x\e\k\4\i\t\u\p\r\7\v\q\c\z\n\x\8\s\f\k\5\4\f\8\t\8\o\e\i\u\p\2\a\w\a\t\t\e\n\w\t\s\n\i\i\s\4\1\z\c\l\q\e\q\x\h\g\o\4\3\3\5\q\j\e\4\p\f\9\z\d\w\b\a\z\2\2\4\g\w\n\d\m\r\k\c\u\z\q\j\t\d\2\d\0\l\s\4\e\1\x\9\1\i\z\5\5\2\1\f\k\z\0\a\f\q\k\p\w\0\g\o\t\i\z\5\f\p\t\7\y\6\5\v\s\a\0\x\0\v\0\w\9\c\f\p\4\s\g\8\q\b\h\u\m\p\6\j\w\1\v\o\e\t\m\9\0\i\l\j\q\0\q\8\b\m\y\z\j\5\0\6\q\m\q\b\f\h\d\b\3\m\k\1\3\3\d\g\f\0\f\d\j\l\q\x\i\d\l\0\5\b\a\v\q\z\o\2\7\9\5\t\5\f\x\v\p\8\y\6\f\f\j\4\3\r\t\q\t\q\w\m\6\7\j\s\9\e\q\n\1\q\p\p\z\e\f\e\9\z\c\4\1\5\3\7\w\x\k\2\x\g\i\d\c\q\8\s\z\c\7\v\a\d\n\g\g\3\k\u\t\z\1\s\8\w\h\n\l\x\u\c\a\t\a\a\8\8\u\d\w\s\9\j\e\3\x\8\o\k\y\1\f\y\4\f\l\l\h\j\i\6\9\1\6\m\m\8\8\t\r\c\l\o\a\o\e\m\5\j\v\6\p\0\x\9\n\q\9\r\f\k\6\6\h\6\n\n\z\s\9\m\y\z\l\i\v\8\0\o\o\0\2\o\6\k\1\o\0\w\s\j\9\d\v\i\d\v\m\7\q\k\5\5\r\8\x\f\x\u\2\o\w\e\z\w\5\x\w\2\w\0\t\9\3\6\l\k\a\d\i\1\e\m\z\c\l\y\z\7\d\m\d\n\b\p\8\d\4\b\e\i\m\a\0\r\q\a\g\3\b\0\9\b\6\9\g\p\w\j\8\n\q\a\1\b\p\p\q\8\s\s\s\k\y\w\o\d\m\4\8\y\c\r\s\8\r\x\v\b\3\p\s\q\k\c\z\t\1\y\4\7\z\e\0\e\y\y\p\o\s\f\3\f\r\p\c\n\q\5\i\8\w\c\u\4\d\p\s\6\l\p\f\o\o\x\l\h\r\d\q\8\y\a\z\c\k\9\8\i\e\i\4\n\0\s\t\a\o\w\z\n\1\e\a\1\9\g\0\0\v\j\1\6\5\o\j\y\3\l\7\c\f\h\r\3\g\0\k\v\m\j\h\6\4\h\j\k\1\0\a\p\j\k\k\f\c\v\o\4\x\o\p\w\p\o\9\k\b\8\9\4\w\r\n\v\o\z\q\g\8\5\e\b\l\p\m\6\d\u\m\y\6\t\t\6\3\j\e\s\c\z\m\o\r\2\c\8\w\4\o\f\8\d\l\v\f\t\x\t\l\2\u\r\l\u\0\6\o\b\x\4\u\o\0\g\5\r\t\i\j\m\y\7\z\v\k\7\5\z\c\l\b\e\j\w\5\q\x\z\f\g\w\n\7\b\t\8\h\m\8\d\c\v\r\4\n\o\c\v\h\g\r\9\q\p\k\o\m\3\y\s\v\t\x\y\r\m\e\7\w\3\q\o\h\q\v\z\b\p\f\q\2\3\x\s\0\v\1\e\g\v\j\k\5\4\p\r\0\y\8\k\k\1\h\r\a\1\2\l\g\2\x\n\t\h\u\n\s\b\a\4\r\m\d\u\q\g\d\r\1\4\7\c\n\8\6\o\h\g\8\e\4\4\d\8\5\n\m\0\1\z\f\u\t\b\h\s\q\w\1\b\7\d\0\l\9\k\h\o\b\g\p\y\h\k\9\y\p\k\j\v\b\j\6\x\s\y\v\5\3\o\n\y\4\s\p\l\s\i\j\x\c\h\g\r\m\u\y\a\c\7\g\0\0\8\7\y\2\w\4\k\j\3\2\v\m\5\a\3\l\d\3\8\d\l\e\z\9\m\r\d\i\9\l\a\z\u\o\4\h\n\v\y\q\p\v\u\b\q\c\a\t\y\n\c\w\r\q\6\1\n\s\x\o\f\o\1\i\4\6\f\o\i\o\6\n\2\w\f\j\8\f\6\m\w\e\v\c\k\g\7\9\h\9\h\k\g\p\i\y\p\f\s\8\5\d\x\a\s\o\e\5\p\m\0\y\7\d\e\4\x\m\c\a\x\u\j\4\3\z\1\1\a\8\n\f\8\4\m\e\5\2\p\s\7\p\v\z\l\g\t\b\g\2\4\z\e\m\c\t\a\x\e\e\v\n\a\g\q\u\4\p\0\a\q\t\r\g\j\4\8\l\0\k\n\a\x\2\f\r\x\m\2\f\y\h\r\5\4\u\t\8\q\e\3\n\5\i\n\s\p\g\3\p\g\w\h\o\i\5\l\6\v\k\k\0\4\j\q\4\2\7\m\k\d\r\3\m\5\1\e\u\v\o\1\f\7\7\p\w\e\a\a\6\9\2\d\q\b\7\g\p\2\m\d\q\1\b\1\q\s\2\2\b\u\p\r\9\6\t\o\d\q\a\6\1\f\j\e\d\d\b\c\o\3\3\0\3\m\2\f\5\e\4\p\d\8\n\q\7\p\0\x\s\3\4\p\a\3\5\4\a\h\f\h\z\l\m\7\u\k\f\0\e\i\0\f\a\a\2\u\7\7\4\a\x\u\f\o\l\w\m\y\z\x\l\4\4\j\w\o\9\m\2\7\j\d\a\q\3\d\l\f\b\o ]] 00:07:28.304 00:07:28.304 real 0m3.021s 00:07:28.304 user 0m2.525s 00:07:28.304 sys 0m1.568s 00:07:28.304 02:51:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.304 02:51:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:28.304 02:51:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:28.304 02:51:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:28.304 02:51:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:28.304 02:51:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:28.304 02:51:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:28.304 02:51:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:07:28.304 02:51:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:28.304 02:51:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:28.304 02:51:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:28.304 02:51:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:28.304 02:51:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:28.304 { 00:07:28.304 "subsystems": [ 00:07:28.304 { 00:07:28.304 "subsystem": "bdev", 00:07:28.304 "config": [ 00:07:28.304 { 00:07:28.304 "params": { 00:07:28.304 "trtype": "pcie", 00:07:28.304 "traddr": "0000:00:10.0", 00:07:28.304 "name": "Nvme0" 00:07:28.304 }, 00:07:28.304 "method": "bdev_nvme_attach_controller" 00:07:28.304 }, 00:07:28.304 { 00:07:28.304 "method": "bdev_wait_for_examine" 00:07:28.304 } 00:07:28.304 ] 00:07:28.304 } 00:07:28.304 ] 00:07:28.304 } 00:07:28.304 [2024-12-05 02:51:59.028270] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:28.304 [2024-12-05 02:51:59.028406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61500 ] 00:07:28.563 [2024-12-05 02:51:59.187574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.563 [2024-12-05 02:51:59.277653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.822 [2024-12-05 02:51:59.426400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.822  [2024-12-05T02:52:00.599Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:29.756 00:07:29.756 02:52:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:29.756 ************************************ 00:07:29.756 END TEST spdk_dd_basic_rw 00:07:29.756 ************************************ 00:07:29.756 00:07:29.756 real 0m38.397s 00:07:29.756 user 0m32.091s 00:07:29.756 sys 0m17.601s 00:07:29.756 02:52:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.756 02:52:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:29.756 02:52:00 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:29.756 02:52:00 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.756 02:52:00 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.756 02:52:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:29.756 ************************************ 00:07:29.756 START TEST spdk_dd_posix 00:07:29.756 ************************************ 00:07:29.756 02:52:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:29.756 * Looking for test storage... 
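Every block in this log is driven by the same run_test wrapper from test/common/autotest_common.sh: it checks that it got a name plus a command (the '[' 2 -le 1 ']' trace), prints the START TEST / END TEST banners, and times the command so the real/user/sys totals above can be reported. A simplified stand-in, hedged because the real helper does more bookkeeping (timing records, xtrace toggling) than is visible here:

    # Simplified run_test stand-in; the banners and the argument-count check follow
    # the traces in this log, everything else about the real helper is omitted or guessed.
    run_test() {
        if [ $# -le 1 ]; then
            echo "usage: run_test <name> <command> [args...]" >&2   # wording is a guess
            return 1
        fi
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }
    run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh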
00:07:29.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:29.756 02:52:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:29.756 02:52:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 00:07:29.756 02:52:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:30.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.015 --rc genhtml_branch_coverage=1 00:07:30.015 --rc genhtml_function_coverage=1 00:07:30.015 --rc genhtml_legend=1 00:07:30.015 --rc geninfo_all_blocks=1 00:07:30.015 --rc geninfo_unexecuted_blocks=1 00:07:30.015 00:07:30.015 ' 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:30.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.015 --rc genhtml_branch_coverage=1 00:07:30.015 --rc genhtml_function_coverage=1 00:07:30.015 --rc genhtml_legend=1 00:07:30.015 --rc geninfo_all_blocks=1 00:07:30.015 --rc geninfo_unexecuted_blocks=1 00:07:30.015 00:07:30.015 ' 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:30.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.015 --rc genhtml_branch_coverage=1 00:07:30.015 --rc genhtml_function_coverage=1 00:07:30.015 --rc genhtml_legend=1 00:07:30.015 --rc geninfo_all_blocks=1 00:07:30.015 --rc geninfo_unexecuted_blocks=1 00:07:30.015 00:07:30.015 ' 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:30.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.015 --rc genhtml_branch_coverage=1 00:07:30.015 --rc genhtml_function_coverage=1 00:07:30.015 --rc genhtml_legend=1 00:07:30.015 --rc geninfo_all_blocks=1 00:07:30.015 --rc geninfo_unexecuted_blocks=1 00:07:30.015 00:07:30.015 ' 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:30.015 * First test run, liburing in use 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:30.015 ************************************ 00:07:30.015 START TEST dd_flag_append 00:07:30.015 ************************************ 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=twlgi2biw9fql5oep1ccil0xh2ok4h78 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=95brr5sr9vm2lts2yz52ffpu2oj2jhi9 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s twlgi2biw9fql5oep1ccil0xh2ok4h78 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 95brr5sr9vm2lts2yz52ffpu2oj2jhi9 00:07:30.015 02:52:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:30.015 [2024-12-05 02:52:00.823149] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
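The first posix case, dd_flag_append, writes one 32-byte random string into each dump file and then copies dump0 onto dump1 with --oflag=append; the check on the following lines passes only if dump1 ends up holding its original bytes followed by dump0's, i.e. the output file was opened for appending rather than truncated. In plain shell terms, reusing the earlier shorthand and the two payloads from the trace:

    # Append semantics under test; a and b are the exact strings gen_bytes 32 produced above.
    a=twlgi2biw9fql5oep1ccil0xh2ok4h78
    b=95brr5sr9vm2lts2yz52ffpu2oj2jhi9
    printf %s "$a" > "$test_dir/dd.dump0"
    printf %s "$b" > "$test_dir/dd.dump1"
    "$SPDK_DD" --if="$test_dir/dd.dump0" --of="$test_dir/dd.dump1" --oflag=append
    [[ $(< "$test_dir/dd.dump1") == "${b}${a}" ]] && echo "existing bytes were preserved"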
00:07:30.015 [2024-12-05 02:52:00.824100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61584 ] 00:07:30.274 [2024-12-05 02:52:00.998961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.274 [2024-12-05 02:52:01.079395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.533 [2024-12-05 02:52:01.222955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.533  [2024-12-05T02:52:02.313Z] Copying: 32/32 [B] (average 31 kBps) 00:07:31.469 00:07:31.469 ************************************ 00:07:31.469 END TEST dd_flag_append 00:07:31.469 ************************************ 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 95brr5sr9vm2lts2yz52ffpu2oj2jhi9twlgi2biw9fql5oep1ccil0xh2ok4h78 == \9\5\b\r\r\5\s\r\9\v\m\2\l\t\s\2\y\z\5\2\f\f\p\u\2\o\j\2\j\h\i\9\t\w\l\g\i\2\b\i\w\9\f\q\l\5\o\e\p\1\c\c\i\l\0\x\h\2\o\k\4\h\7\8 ]] 00:07:31.469 00:07:31.469 real 0m1.445s 00:07:31.469 user 0m1.142s 00:07:31.469 sys 0m0.810s 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:31.469 ************************************ 00:07:31.469 START TEST dd_flag_directory 00:07:31.469 ************************************ 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:31.469 02:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:31.729 [2024-12-05 02:52:02.320176] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:31.729 [2024-12-05 02:52:02.320348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61619 ] 00:07:31.729 [2024-12-05 02:52:02.497160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.988 [2024-12-05 02:52:02.584166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.988 [2024-12-05 02:52:02.745247] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.988 [2024-12-05 02:52:02.830191] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:31.988 [2024-12-05 02:52:02.830267] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:31.988 [2024-12-05 02:52:02.830290] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:32.934 [2024-12-05 02:52:03.415906] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:32.935 02:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:32.935 02:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:32.935 02:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:32.935 02:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:32.935 02:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:32.935 02:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:32.935 02:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:32.935 02:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:32.935 02:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:32.935 02:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.935 02:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.935 02:52:03 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.935 02:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.935 02:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.935 02:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.935 02:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.935 02:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:32.935 02:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:32.935 [2024-12-05 02:52:03.768118] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:32.935 [2024-12-05 02:52:03.768333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61646 ] 00:07:33.209 [2024-12-05 02:52:03.948188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.209 [2024-12-05 02:52:04.041534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.468 [2024-12-05 02:52:04.188659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.468 [2024-12-05 02:52:04.274245] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:33.468 [2024-12-05 02:52:04.274322] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:33.468 [2024-12-05 02:52:04.274346] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.406 [2024-12-05 02:52:04.909581] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:34.406 ************************************ 00:07:34.406 END TEST dd_flag_directory 00:07:34.406 ************************************ 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:34.406 00:07:34.406 real 0m2.932s 00:07:34.406 user 0m2.317s 00:07:34.406 sys 0m0.393s 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:34.406 02:52:05 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:34.406 ************************************ 00:07:34.406 START TEST dd_flag_nofollow 00:07:34.406 ************************************ 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:34.406 02:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:34.666 [2024-12-05 02:52:05.317431] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
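dd_flag_nofollow links dd.dump0.link and dd.dump1.link onto the real dump files and runs spdk_dd under the NOT wrapper: with --iflag=nofollow the open of the input symlink is expected to fail (the "Too many levels of symbolic links" errors below), and the wrapper turns that expected failure into a passing step. The first case, condensed and without the wrapper:

    # nofollow expectation, condensed; in the test the NOT helper wraps this command
    # and succeeds only when it fails.
    ln -fs "$test_dir/dd.dump0" "$test_dir/dd.dump0.link"
    if ! "$SPDK_DD" --if="$test_dir/dd.dump0.link" --iflag=nofollow \
                    --of="$test_dir/dd.dump1"; then
        echo "nofollow refused to open the symlink, as expected"
    fi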
00:07:34.666 [2024-12-05 02:52:05.317619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61681 ] 00:07:34.666 [2024-12-05 02:52:05.497961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.925 [2024-12-05 02:52:05.582362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.925 [2024-12-05 02:52:05.748491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.185 [2024-12-05 02:52:05.837317] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:35.185 [2024-12-05 02:52:05.837396] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:35.185 [2024-12-05 02:52:05.837420] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.754 [2024-12-05 02:52:06.428206] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:36.013 02:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:36.014 02:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:36.014 02:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:36.014 02:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:36.014 02:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:36.014 02:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:36.014 02:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:36.014 02:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:36.014 02:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:36.014 02:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.014 02:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.014 02:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.014 02:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.014 02:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.014 02:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.014 02:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.014 02:52:06 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:36.014 02:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:36.014 [2024-12-05 02:52:06.765247] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:36.014 [2024-12-05 02:52:06.765407] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61702 ] 00:07:36.274 [2024-12-05 02:52:06.946912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.274 [2024-12-05 02:52:07.031981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.533 [2024-12-05 02:52:07.181517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.533 [2024-12-05 02:52:07.266179] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:36.533 [2024-12-05 02:52:07.266253] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:36.533 [2024-12-05 02:52:07.266276] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.102 [2024-12-05 02:52:07.865408] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:37.362 02:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:37.362 02:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.362 02:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:37.362 02:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:37.362 02:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:37.362 02:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.362 02:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:37.362 02:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:37.362 02:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:37.362 02:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.621 [2024-12-05 02:52:08.218485] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:37.621 [2024-12-05 02:52:08.218665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61722 ] 00:07:37.621 [2024-12-05 02:52:08.392717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.880 [2024-12-05 02:52:08.484113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.880 [2024-12-05 02:52:08.628058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.880  [2024-12-05T02:52:09.662Z] Copying: 512/512 [B] (average 500 kBps) 00:07:38.818 00:07:38.818 02:52:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ wgb5enflakvan8od4tahpnwc2jbp3y3quprqls14drt8l8kofbzy8s85l8l8zlbzny97n9nxft32jcbvw21q5z42p4336ofcf30fd9tzxq9nt4f61h2r4rdbd7uqjr0lhi3r8nqr3lv8exfetvb1gc8yvrrndrhyyspxjeohwxak4i9rreftp5czrhjuga90uolmhudigrh2d1praq8zm1jtwyfym0hmdv7qm9c4fstm9a9j1ks1jktl5g5lnp2u9weai01ml2jprmbfng8rzs1l5hb7ta4gqokcopu07ql7sfkfqxa6qfxdc67ddduubgp3li5qc8yp0rjaq73v8u6mcvqqsx5pn5mimizopho91w7yj27eqqjwvcy3ck2q0injftfiewpd0micm92i4tqiulyhrcz3zfdkaudd3hivsucovzytngdzidacsf6njyqs92opauwjfwz3gqbz0cm4ozqs6inu0k7yriud2d23fqi96p0sjalneim39ccg == \w\g\b\5\e\n\f\l\a\k\v\a\n\8\o\d\4\t\a\h\p\n\w\c\2\j\b\p\3\y\3\q\u\p\r\q\l\s\1\4\d\r\t\8\l\8\k\o\f\b\z\y\8\s\8\5\l\8\l\8\z\l\b\z\n\y\9\7\n\9\n\x\f\t\3\2\j\c\b\v\w\2\1\q\5\z\4\2\p\4\3\3\6\o\f\c\f\3\0\f\d\9\t\z\x\q\9\n\t\4\f\6\1\h\2\r\4\r\d\b\d\7\u\q\j\r\0\l\h\i\3\r\8\n\q\r\3\l\v\8\e\x\f\e\t\v\b\1\g\c\8\y\v\r\r\n\d\r\h\y\y\s\p\x\j\e\o\h\w\x\a\k\4\i\9\r\r\e\f\t\p\5\c\z\r\h\j\u\g\a\9\0\u\o\l\m\h\u\d\i\g\r\h\2\d\1\p\r\a\q\8\z\m\1\j\t\w\y\f\y\m\0\h\m\d\v\7\q\m\9\c\4\f\s\t\m\9\a\9\j\1\k\s\1\j\k\t\l\5\g\5\l\n\p\2\u\9\w\e\a\i\0\1\m\l\2\j\p\r\m\b\f\n\g\8\r\z\s\1\l\5\h\b\7\t\a\4\g\q\o\k\c\o\p\u\0\7\q\l\7\s\f\k\f\q\x\a\6\q\f\x\d\c\6\7\d\d\d\u\u\b\g\p\3\l\i\5\q\c\8\y\p\0\r\j\a\q\7\3\v\8\u\6\m\c\v\q\q\s\x\5\p\n\5\m\i\m\i\z\o\p\h\o\9\1\w\7\y\j\2\7\e\q\q\j\w\v\c\y\3\c\k\2\q\0\i\n\j\f\t\f\i\e\w\p\d\0\m\i\c\m\9\2\i\4\t\q\i\u\l\y\h\r\c\z\3\z\f\d\k\a\u\d\d\3\h\i\v\s\u\c\o\v\z\y\t\n\g\d\z\i\d\a\c\s\f\6\n\j\y\q\s\9\2\o\p\a\u\w\j\f\w\z\3\g\q\b\z\0\c\m\4\o\z\q\s\6\i\n\u\0\k\7\y\r\i\u\d\2\d\2\3\f\q\i\9\6\p\0\s\j\a\l\n\e\i\m\3\9\c\c\g ]] 00:07:38.818 00:07:38.818 real 0m4.367s 00:07:38.818 user 0m3.451s 00:07:38.818 sys 0m1.210s 00:07:38.818 02:52:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.818 02:52:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:38.818 ************************************ 00:07:38.818 END TEST dd_flag_nofollow 00:07:38.818 ************************************ 00:07:38.818 02:52:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:38.819 02:52:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.819 02:52:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.819 02:52:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:38.819 ************************************ 00:07:38.819 START TEST dd_flag_noatime 00:07:38.819 ************************************ 00:07:38.819 02:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:07:38.819 02:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:07:38.819 02:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:38.819 02:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:38.819 02:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:38.819 02:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:38.819 02:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:38.819 02:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733367128 00:07:38.819 02:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.819 02:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733367129 00:07:38.819 02:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:40.196 02:52:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.196 [2024-12-05 02:52:10.750893] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:40.196 [2024-12-05 02:52:10.751050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61771 ] 00:07:40.196 [2024-12-05 02:52:10.928892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.196 [2024-12-05 02:52:11.011228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.456 [2024-12-05 02:52:11.156231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.456  [2024-12-05T02:52:12.237Z] Copying: 512/512 [B] (average 500 kBps) 00:07:41.393 00:07:41.393 02:52:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:41.393 02:52:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733367128 )) 00:07:41.393 02:52:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.393 02:52:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733367129 )) 00:07:41.393 02:52:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.393 [2024-12-05 02:52:12.203193] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:41.393 [2024-12-05 02:52:12.203395] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61802 ] 00:07:41.651 [2024-12-05 02:52:12.379441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.651 [2024-12-05 02:52:12.460285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.909 [2024-12-05 02:52:12.610616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.909  [2024-12-05T02:52:13.686Z] Copying: 512/512 [B] (average 500 kBps) 00:07:42.842 00:07:42.842 02:52:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:42.842 02:52:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733367132 )) 00:07:42.842 00:07:42.842 real 0m3.933s 00:07:42.842 user 0m2.317s 00:07:42.842 sys 0m1.635s 00:07:42.842 02:52:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.842 02:52:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:42.842 ************************************ 00:07:42.842 END TEST dd_flag_noatime 00:07:42.842 ************************************ 00:07:42.842 02:52:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:42.842 02:52:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.842 02:52:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.842 02:52:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:42.842 ************************************ 00:07:42.842 START TEST dd_flags_misc 00:07:42.842 ************************************ 00:07:42.842 02:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:07:42.842 02:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:42.842 02:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:42.842 02:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:42.842 02:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:42.842 02:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:42.842 02:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:42.842 02:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:42.842 02:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:42.842 02:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:43.101 [2024-12-05 02:52:13.725332] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:43.101 [2024-12-05 02:52:13.725509] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61837 ] 00:07:43.101 [2024-12-05 02:52:13.905343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.360 [2024-12-05 02:52:13.989048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.360 [2024-12-05 02:52:14.141030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.617  [2024-12-05T02:52:15.393Z] Copying: 512/512 [B] (average 500 kBps) 00:07:44.549 00:07:44.549 02:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7fwlvuluidpzqy9pog847wekxag08a66tlltsfn5xl4bx2gm9ngljh6x3l04ndhd41lgelb4tkuzq5noynoo9jpaxqv58khmjkslrt5h1o77izaa4sreojixykqm3ug32qglimcspr3ax3qfhj7ui7c9i32st1s9i9zetwd8owai7gsqfkc6pd18h3l6lgn4rnmmk9sfm4ozkgmoi4jmj54w9trfgguj1sq9pbkvaimrqrhsnb4fvsv2wq8lkvrxs0wmbp23bnko6txh3b9bmg63r4pjl61ldqbze3ykkaiu822hsvmrwtk8hgqts7d9kqyskpmdl3t5ujyc3v61eicdgyshs77wj1479tt43td3d8e83w5p2u7f0eu5ak19feuwun784koecpey6c712hd3ojjdd0pgmoxtjntc83lbehu2a82kpub4fmcxc8ew1pfvp0shqe4pxz1nbn53ynw8mx92umo2fj4735ocfi5m7eb74plql60oo1iipllz == \7\f\w\l\v\u\l\u\i\d\p\z\q\y\9\p\o\g\8\4\7\w\e\k\x\a\g\0\8\a\6\6\t\l\l\t\s\f\n\5\x\l\4\b\x\2\g\m\9\n\g\l\j\h\6\x\3\l\0\4\n\d\h\d\4\1\l\g\e\l\b\4\t\k\u\z\q\5\n\o\y\n\o\o\9\j\p\a\x\q\v\5\8\k\h\m\j\k\s\l\r\t\5\h\1\o\7\7\i\z\a\a\4\s\r\e\o\j\i\x\y\k\q\m\3\u\g\3\2\q\g\l\i\m\c\s\p\r\3\a\x\3\q\f\h\j\7\u\i\7\c\9\i\3\2\s\t\1\s\9\i\9\z\e\t\w\d\8\o\w\a\i\7\g\s\q\f\k\c\6\p\d\1\8\h\3\l\6\l\g\n\4\r\n\m\m\k\9\s\f\m\4\o\z\k\g\m\o\i\4\j\m\j\5\4\w\9\t\r\f\g\g\u\j\1\s\q\9\p\b\k\v\a\i\m\r\q\r\h\s\n\b\4\f\v\s\v\2\w\q\8\l\k\v\r\x\s\0\w\m\b\p\2\3\b\n\k\o\6\t\x\h\3\b\9\b\m\g\6\3\r\4\p\j\l\6\1\l\d\q\b\z\e\3\y\k\k\a\i\u\8\2\2\h\s\v\m\r\w\t\k\8\h\g\q\t\s\7\d\9\k\q\y\s\k\p\m\d\l\3\t\5\u\j\y\c\3\v\6\1\e\i\c\d\g\y\s\h\s\7\7\w\j\1\4\7\9\t\t\4\3\t\d\3\d\8\e\8\3\w\5\p\2\u\7\f\0\e\u\5\a\k\1\9\f\e\u\w\u\n\7\8\4\k\o\e\c\p\e\y\6\c\7\1\2\h\d\3\o\j\j\d\d\0\p\g\m\o\x\t\j\n\t\c\8\3\l\b\e\h\u\2\a\8\2\k\p\u\b\4\f\m\c\x\c\8\e\w\1\p\f\v\p\0\s\h\q\e\4\p\x\z\1\n\b\n\5\3\y\n\w\8\m\x\9\2\u\m\o\2\f\j\4\7\3\5\o\c\f\i\5\m\7\e\b\7\4\p\l\q\l\6\0\o\o\1\i\i\p\l\l\z ]] 00:07:44.549 02:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:44.549 02:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:44.549 [2024-12-05 02:52:15.223184] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:44.549 [2024-12-05 02:52:15.223393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61859 ] 00:07:44.807 [2024-12-05 02:52:15.398047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.807 [2024-12-05 02:52:15.491045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.807 [2024-12-05 02:52:15.645137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.063  [2024-12-05T02:52:16.842Z] Copying: 512/512 [B] (average 500 kBps) 00:07:45.998 00:07:45.998 02:52:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7fwlvuluidpzqy9pog847wekxag08a66tlltsfn5xl4bx2gm9ngljh6x3l04ndhd41lgelb4tkuzq5noynoo9jpaxqv58khmjkslrt5h1o77izaa4sreojixykqm3ug32qglimcspr3ax3qfhj7ui7c9i32st1s9i9zetwd8owai7gsqfkc6pd18h3l6lgn4rnmmk9sfm4ozkgmoi4jmj54w9trfgguj1sq9pbkvaimrqrhsnb4fvsv2wq8lkvrxs0wmbp23bnko6txh3b9bmg63r4pjl61ldqbze3ykkaiu822hsvmrwtk8hgqts7d9kqyskpmdl3t5ujyc3v61eicdgyshs77wj1479tt43td3d8e83w5p2u7f0eu5ak19feuwun784koecpey6c712hd3ojjdd0pgmoxtjntc83lbehu2a82kpub4fmcxc8ew1pfvp0shqe4pxz1nbn53ynw8mx92umo2fj4735ocfi5m7eb74plql60oo1iipllz == \7\f\w\l\v\u\l\u\i\d\p\z\q\y\9\p\o\g\8\4\7\w\e\k\x\a\g\0\8\a\6\6\t\l\l\t\s\f\n\5\x\l\4\b\x\2\g\m\9\n\g\l\j\h\6\x\3\l\0\4\n\d\h\d\4\1\l\g\e\l\b\4\t\k\u\z\q\5\n\o\y\n\o\o\9\j\p\a\x\q\v\5\8\k\h\m\j\k\s\l\r\t\5\h\1\o\7\7\i\z\a\a\4\s\r\e\o\j\i\x\y\k\q\m\3\u\g\3\2\q\g\l\i\m\c\s\p\r\3\a\x\3\q\f\h\j\7\u\i\7\c\9\i\3\2\s\t\1\s\9\i\9\z\e\t\w\d\8\o\w\a\i\7\g\s\q\f\k\c\6\p\d\1\8\h\3\l\6\l\g\n\4\r\n\m\m\k\9\s\f\m\4\o\z\k\g\m\o\i\4\j\m\j\5\4\w\9\t\r\f\g\g\u\j\1\s\q\9\p\b\k\v\a\i\m\r\q\r\h\s\n\b\4\f\v\s\v\2\w\q\8\l\k\v\r\x\s\0\w\m\b\p\2\3\b\n\k\o\6\t\x\h\3\b\9\b\m\g\6\3\r\4\p\j\l\6\1\l\d\q\b\z\e\3\y\k\k\a\i\u\8\2\2\h\s\v\m\r\w\t\k\8\h\g\q\t\s\7\d\9\k\q\y\s\k\p\m\d\l\3\t\5\u\j\y\c\3\v\6\1\e\i\c\d\g\y\s\h\s\7\7\w\j\1\4\7\9\t\t\4\3\t\d\3\d\8\e\8\3\w\5\p\2\u\7\f\0\e\u\5\a\k\1\9\f\e\u\w\u\n\7\8\4\k\o\e\c\p\e\y\6\c\7\1\2\h\d\3\o\j\j\d\d\0\p\g\m\o\x\t\j\n\t\c\8\3\l\b\e\h\u\2\a\8\2\k\p\u\b\4\f\m\c\x\c\8\e\w\1\p\f\v\p\0\s\h\q\e\4\p\x\z\1\n\b\n\5\3\y\n\w\8\m\x\9\2\u\m\o\2\f\j\4\7\3\5\o\c\f\i\5\m\7\e\b\7\4\p\l\q\l\6\0\o\o\1\i\i\p\l\l\z ]] 00:07:45.998 02:52:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:45.998 02:52:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:45.998 [2024-12-05 02:52:16.766912] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:45.998 [2024-12-05 02:52:16.767090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61880 ] 00:07:46.257 [2024-12-05 02:52:16.955321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.257 [2024-12-05 02:52:17.078221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.515 [2024-12-05 02:52:17.223740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.515  [2024-12-05T02:52:18.309Z] Copying: 512/512 [B] (average 250 kBps) 00:07:47.465 00:07:47.465 02:52:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7fwlvuluidpzqy9pog847wekxag08a66tlltsfn5xl4bx2gm9ngljh6x3l04ndhd41lgelb4tkuzq5noynoo9jpaxqv58khmjkslrt5h1o77izaa4sreojixykqm3ug32qglimcspr3ax3qfhj7ui7c9i32st1s9i9zetwd8owai7gsqfkc6pd18h3l6lgn4rnmmk9sfm4ozkgmoi4jmj54w9trfgguj1sq9pbkvaimrqrhsnb4fvsv2wq8lkvrxs0wmbp23bnko6txh3b9bmg63r4pjl61ldqbze3ykkaiu822hsvmrwtk8hgqts7d9kqyskpmdl3t5ujyc3v61eicdgyshs77wj1479tt43td3d8e83w5p2u7f0eu5ak19feuwun784koecpey6c712hd3ojjdd0pgmoxtjntc83lbehu2a82kpub4fmcxc8ew1pfvp0shqe4pxz1nbn53ynw8mx92umo2fj4735ocfi5m7eb74plql60oo1iipllz == \7\f\w\l\v\u\l\u\i\d\p\z\q\y\9\p\o\g\8\4\7\w\e\k\x\a\g\0\8\a\6\6\t\l\l\t\s\f\n\5\x\l\4\b\x\2\g\m\9\n\g\l\j\h\6\x\3\l\0\4\n\d\h\d\4\1\l\g\e\l\b\4\t\k\u\z\q\5\n\o\y\n\o\o\9\j\p\a\x\q\v\5\8\k\h\m\j\k\s\l\r\t\5\h\1\o\7\7\i\z\a\a\4\s\r\e\o\j\i\x\y\k\q\m\3\u\g\3\2\q\g\l\i\m\c\s\p\r\3\a\x\3\q\f\h\j\7\u\i\7\c\9\i\3\2\s\t\1\s\9\i\9\z\e\t\w\d\8\o\w\a\i\7\g\s\q\f\k\c\6\p\d\1\8\h\3\l\6\l\g\n\4\r\n\m\m\k\9\s\f\m\4\o\z\k\g\m\o\i\4\j\m\j\5\4\w\9\t\r\f\g\g\u\j\1\s\q\9\p\b\k\v\a\i\m\r\q\r\h\s\n\b\4\f\v\s\v\2\w\q\8\l\k\v\r\x\s\0\w\m\b\p\2\3\b\n\k\o\6\t\x\h\3\b\9\b\m\g\6\3\r\4\p\j\l\6\1\l\d\q\b\z\e\3\y\k\k\a\i\u\8\2\2\h\s\v\m\r\w\t\k\8\h\g\q\t\s\7\d\9\k\q\y\s\k\p\m\d\l\3\t\5\u\j\y\c\3\v\6\1\e\i\c\d\g\y\s\h\s\7\7\w\j\1\4\7\9\t\t\4\3\t\d\3\d\8\e\8\3\w\5\p\2\u\7\f\0\e\u\5\a\k\1\9\f\e\u\w\u\n\7\8\4\k\o\e\c\p\e\y\6\c\7\1\2\h\d\3\o\j\j\d\d\0\p\g\m\o\x\t\j\n\t\c\8\3\l\b\e\h\u\2\a\8\2\k\p\u\b\4\f\m\c\x\c\8\e\w\1\p\f\v\p\0\s\h\q\e\4\p\x\z\1\n\b\n\5\3\y\n\w\8\m\x\9\2\u\m\o\2\f\j\4\7\3\5\o\c\f\i\5\m\7\e\b\7\4\p\l\q\l\6\0\o\o\1\i\i\p\l\l\z ]] 00:07:47.465 02:52:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:47.465 02:52:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:47.465 [2024-12-05 02:52:18.277129] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:47.466 [2024-12-05 02:52:18.277289] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61902 ] 00:07:47.749 [2024-12-05 02:52:18.459324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.749 [2024-12-05 02:52:18.560940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.007 [2024-12-05 02:52:18.734239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.007  [2024-12-05T02:52:19.789Z] Copying: 512/512 [B] (average 250 kBps) 00:07:48.945 00:07:48.945 02:52:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7fwlvuluidpzqy9pog847wekxag08a66tlltsfn5xl4bx2gm9ngljh6x3l04ndhd41lgelb4tkuzq5noynoo9jpaxqv58khmjkslrt5h1o77izaa4sreojixykqm3ug32qglimcspr3ax3qfhj7ui7c9i32st1s9i9zetwd8owai7gsqfkc6pd18h3l6lgn4rnmmk9sfm4ozkgmoi4jmj54w9trfgguj1sq9pbkvaimrqrhsnb4fvsv2wq8lkvrxs0wmbp23bnko6txh3b9bmg63r4pjl61ldqbze3ykkaiu822hsvmrwtk8hgqts7d9kqyskpmdl3t5ujyc3v61eicdgyshs77wj1479tt43td3d8e83w5p2u7f0eu5ak19feuwun784koecpey6c712hd3ojjdd0pgmoxtjntc83lbehu2a82kpub4fmcxc8ew1pfvp0shqe4pxz1nbn53ynw8mx92umo2fj4735ocfi5m7eb74plql60oo1iipllz == \7\f\w\l\v\u\l\u\i\d\p\z\q\y\9\p\o\g\8\4\7\w\e\k\x\a\g\0\8\a\6\6\t\l\l\t\s\f\n\5\x\l\4\b\x\2\g\m\9\n\g\l\j\h\6\x\3\l\0\4\n\d\h\d\4\1\l\g\e\l\b\4\t\k\u\z\q\5\n\o\y\n\o\o\9\j\p\a\x\q\v\5\8\k\h\m\j\k\s\l\r\t\5\h\1\o\7\7\i\z\a\a\4\s\r\e\o\j\i\x\y\k\q\m\3\u\g\3\2\q\g\l\i\m\c\s\p\r\3\a\x\3\q\f\h\j\7\u\i\7\c\9\i\3\2\s\t\1\s\9\i\9\z\e\t\w\d\8\o\w\a\i\7\g\s\q\f\k\c\6\p\d\1\8\h\3\l\6\l\g\n\4\r\n\m\m\k\9\s\f\m\4\o\z\k\g\m\o\i\4\j\m\j\5\4\w\9\t\r\f\g\g\u\j\1\s\q\9\p\b\k\v\a\i\m\r\q\r\h\s\n\b\4\f\v\s\v\2\w\q\8\l\k\v\r\x\s\0\w\m\b\p\2\3\b\n\k\o\6\t\x\h\3\b\9\b\m\g\6\3\r\4\p\j\l\6\1\l\d\q\b\z\e\3\y\k\k\a\i\u\8\2\2\h\s\v\m\r\w\t\k\8\h\g\q\t\s\7\d\9\k\q\y\s\k\p\m\d\l\3\t\5\u\j\y\c\3\v\6\1\e\i\c\d\g\y\s\h\s\7\7\w\j\1\4\7\9\t\t\4\3\t\d\3\d\8\e\8\3\w\5\p\2\u\7\f\0\e\u\5\a\k\1\9\f\e\u\w\u\n\7\8\4\k\o\e\c\p\e\y\6\c\7\1\2\h\d\3\o\j\j\d\d\0\p\g\m\o\x\t\j\n\t\c\8\3\l\b\e\h\u\2\a\8\2\k\p\u\b\4\f\m\c\x\c\8\e\w\1\p\f\v\p\0\s\h\q\e\4\p\x\z\1\n\b\n\5\3\y\n\w\8\m\x\9\2\u\m\o\2\f\j\4\7\3\5\o\c\f\i\5\m\7\e\b\7\4\p\l\q\l\6\0\o\o\1\i\i\p\l\l\z ]] 00:07:48.945 02:52:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:48.945 02:52:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:48.945 02:52:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:48.945 02:52:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:48.945 02:52:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:48.945 02:52:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:49.204 [2024-12-05 02:52:19.819670] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:49.204 [2024-12-05 02:52:19.819859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61923 ] 00:07:49.204 [2024-12-05 02:52:19.997340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.462 [2024-12-05 02:52:20.089468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.462 [2024-12-05 02:52:20.235049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.721  [2024-12-05T02:52:21.503Z] Copying: 512/512 [B] (average 500 kBps) 00:07:50.659 00:07:50.659 02:52:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ wa9g15ouxakdfmrc7da2a85q91sfs8hj04zkxv7vvo1yzbguj1afkgkf32m3q99w2kvdksftgcy17qs8dottcexmq3l0y1ya11796dkzf4pffi013xn0tqa8uzb6sxdv5d86n5c6q9xh6nfktf2hhxs2xakdxs5bp379irsqaw40odkv8im2wtqy57ba3m1r1wvqac3cloqytq5i0w774o5mlh0x945oq3zkidiay9uqi5rzhnemw9kucyou2v2dx8ee2me1gdo44t1xmp7epv4si9cd4nzlx0eae1aw60ym5nnx3q26prrnjfmbvlj94nluecj9r848p5jfrnfmwbxvfu33du5h9zhkqcnx7j2swrs688umndiyza2a8oz0khtjrqgibt2s34idz1levcgpo9b4ysj73klu6nmr34lsmlzee2eimtu84ifwck0c58rq5ut5ybo6uoxw8cr3obiks1r3d4ycecvdlejbbddbkfgm1bofg56zi1yhsut7 == \w\a\9\g\1\5\o\u\x\a\k\d\f\m\r\c\7\d\a\2\a\8\5\q\9\1\s\f\s\8\h\j\0\4\z\k\x\v\7\v\v\o\1\y\z\b\g\u\j\1\a\f\k\g\k\f\3\2\m\3\q\9\9\w\2\k\v\d\k\s\f\t\g\c\y\1\7\q\s\8\d\o\t\t\c\e\x\m\q\3\l\0\y\1\y\a\1\1\7\9\6\d\k\z\f\4\p\f\f\i\0\1\3\x\n\0\t\q\a\8\u\z\b\6\s\x\d\v\5\d\8\6\n\5\c\6\q\9\x\h\6\n\f\k\t\f\2\h\h\x\s\2\x\a\k\d\x\s\5\b\p\3\7\9\i\r\s\q\a\w\4\0\o\d\k\v\8\i\m\2\w\t\q\y\5\7\b\a\3\m\1\r\1\w\v\q\a\c\3\c\l\o\q\y\t\q\5\i\0\w\7\7\4\o\5\m\l\h\0\x\9\4\5\o\q\3\z\k\i\d\i\a\y\9\u\q\i\5\r\z\h\n\e\m\w\9\k\u\c\y\o\u\2\v\2\d\x\8\e\e\2\m\e\1\g\d\o\4\4\t\1\x\m\p\7\e\p\v\4\s\i\9\c\d\4\n\z\l\x\0\e\a\e\1\a\w\6\0\y\m\5\n\n\x\3\q\2\6\p\r\r\n\j\f\m\b\v\l\j\9\4\n\l\u\e\c\j\9\r\8\4\8\p\5\j\f\r\n\f\m\w\b\x\v\f\u\3\3\d\u\5\h\9\z\h\k\q\c\n\x\7\j\2\s\w\r\s\6\8\8\u\m\n\d\i\y\z\a\2\a\8\o\z\0\k\h\t\j\r\q\g\i\b\t\2\s\3\4\i\d\z\1\l\e\v\c\g\p\o\9\b\4\y\s\j\7\3\k\l\u\6\n\m\r\3\4\l\s\m\l\z\e\e\2\e\i\m\t\u\8\4\i\f\w\c\k\0\c\5\8\r\q\5\u\t\5\y\b\o\6\u\o\x\w\8\c\r\3\o\b\i\k\s\1\r\3\d\4\y\c\e\c\v\d\l\e\j\b\b\d\d\b\k\f\g\m\1\b\o\f\g\5\6\z\i\1\y\h\s\u\t\7 ]] 00:07:50.659 02:52:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:50.659 02:52:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:50.659 [2024-12-05 02:52:21.308586] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:50.659 [2024-12-05 02:52:21.308780] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61945 ] 00:07:50.659 [2024-12-05 02:52:21.486328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.918 [2024-12-05 02:52:21.578377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.919 [2024-12-05 02:52:21.733248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.177  [2024-12-05T02:52:22.959Z] Copying: 512/512 [B] (average 500 kBps) 00:07:52.115 00:07:52.115 02:52:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ wa9g15ouxakdfmrc7da2a85q91sfs8hj04zkxv7vvo1yzbguj1afkgkf32m3q99w2kvdksftgcy17qs8dottcexmq3l0y1ya11796dkzf4pffi013xn0tqa8uzb6sxdv5d86n5c6q9xh6nfktf2hhxs2xakdxs5bp379irsqaw40odkv8im2wtqy57ba3m1r1wvqac3cloqytq5i0w774o5mlh0x945oq3zkidiay9uqi5rzhnemw9kucyou2v2dx8ee2me1gdo44t1xmp7epv4si9cd4nzlx0eae1aw60ym5nnx3q26prrnjfmbvlj94nluecj9r848p5jfrnfmwbxvfu33du5h9zhkqcnx7j2swrs688umndiyza2a8oz0khtjrqgibt2s34idz1levcgpo9b4ysj73klu6nmr34lsmlzee2eimtu84ifwck0c58rq5ut5ybo6uoxw8cr3obiks1r3d4ycecvdlejbbddbkfgm1bofg56zi1yhsut7 == \w\a\9\g\1\5\o\u\x\a\k\d\f\m\r\c\7\d\a\2\a\8\5\q\9\1\s\f\s\8\h\j\0\4\z\k\x\v\7\v\v\o\1\y\z\b\g\u\j\1\a\f\k\g\k\f\3\2\m\3\q\9\9\w\2\k\v\d\k\s\f\t\g\c\y\1\7\q\s\8\d\o\t\t\c\e\x\m\q\3\l\0\y\1\y\a\1\1\7\9\6\d\k\z\f\4\p\f\f\i\0\1\3\x\n\0\t\q\a\8\u\z\b\6\s\x\d\v\5\d\8\6\n\5\c\6\q\9\x\h\6\n\f\k\t\f\2\h\h\x\s\2\x\a\k\d\x\s\5\b\p\3\7\9\i\r\s\q\a\w\4\0\o\d\k\v\8\i\m\2\w\t\q\y\5\7\b\a\3\m\1\r\1\w\v\q\a\c\3\c\l\o\q\y\t\q\5\i\0\w\7\7\4\o\5\m\l\h\0\x\9\4\5\o\q\3\z\k\i\d\i\a\y\9\u\q\i\5\r\z\h\n\e\m\w\9\k\u\c\y\o\u\2\v\2\d\x\8\e\e\2\m\e\1\g\d\o\4\4\t\1\x\m\p\7\e\p\v\4\s\i\9\c\d\4\n\z\l\x\0\e\a\e\1\a\w\6\0\y\m\5\n\n\x\3\q\2\6\p\r\r\n\j\f\m\b\v\l\j\9\4\n\l\u\e\c\j\9\r\8\4\8\p\5\j\f\r\n\f\m\w\b\x\v\f\u\3\3\d\u\5\h\9\z\h\k\q\c\n\x\7\j\2\s\w\r\s\6\8\8\u\m\n\d\i\y\z\a\2\a\8\o\z\0\k\h\t\j\r\q\g\i\b\t\2\s\3\4\i\d\z\1\l\e\v\c\g\p\o\9\b\4\y\s\j\7\3\k\l\u\6\n\m\r\3\4\l\s\m\l\z\e\e\2\e\i\m\t\u\8\4\i\f\w\c\k\0\c\5\8\r\q\5\u\t\5\y\b\o\6\u\o\x\w\8\c\r\3\o\b\i\k\s\1\r\3\d\4\y\c\e\c\v\d\l\e\j\b\b\d\d\b\k\f\g\m\1\b\o\f\g\5\6\z\i\1\y\h\s\u\t\7 ]] 00:07:52.115 02:52:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:52.115 02:52:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:52.115 [2024-12-05 02:52:22.814945] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:52.115 [2024-12-05 02:52:22.815114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61966 ] 00:07:52.374 [2024-12-05 02:52:22.991327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.374 [2024-12-05 02:52:23.083384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.633 [2024-12-05 02:52:23.238857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.633  [2024-12-05T02:52:24.414Z] Copying: 512/512 [B] (average 125 kBps) 00:07:53.570 00:07:53.571 02:52:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ wa9g15ouxakdfmrc7da2a85q91sfs8hj04zkxv7vvo1yzbguj1afkgkf32m3q99w2kvdksftgcy17qs8dottcexmq3l0y1ya11796dkzf4pffi013xn0tqa8uzb6sxdv5d86n5c6q9xh6nfktf2hhxs2xakdxs5bp379irsqaw40odkv8im2wtqy57ba3m1r1wvqac3cloqytq5i0w774o5mlh0x945oq3zkidiay9uqi5rzhnemw9kucyou2v2dx8ee2me1gdo44t1xmp7epv4si9cd4nzlx0eae1aw60ym5nnx3q26prrnjfmbvlj94nluecj9r848p5jfrnfmwbxvfu33du5h9zhkqcnx7j2swrs688umndiyza2a8oz0khtjrqgibt2s34idz1levcgpo9b4ysj73klu6nmr34lsmlzee2eimtu84ifwck0c58rq5ut5ybo6uoxw8cr3obiks1r3d4ycecvdlejbbddbkfgm1bofg56zi1yhsut7 == \w\a\9\g\1\5\o\u\x\a\k\d\f\m\r\c\7\d\a\2\a\8\5\q\9\1\s\f\s\8\h\j\0\4\z\k\x\v\7\v\v\o\1\y\z\b\g\u\j\1\a\f\k\g\k\f\3\2\m\3\q\9\9\w\2\k\v\d\k\s\f\t\g\c\y\1\7\q\s\8\d\o\t\t\c\e\x\m\q\3\l\0\y\1\y\a\1\1\7\9\6\d\k\z\f\4\p\f\f\i\0\1\3\x\n\0\t\q\a\8\u\z\b\6\s\x\d\v\5\d\8\6\n\5\c\6\q\9\x\h\6\n\f\k\t\f\2\h\h\x\s\2\x\a\k\d\x\s\5\b\p\3\7\9\i\r\s\q\a\w\4\0\o\d\k\v\8\i\m\2\w\t\q\y\5\7\b\a\3\m\1\r\1\w\v\q\a\c\3\c\l\o\q\y\t\q\5\i\0\w\7\7\4\o\5\m\l\h\0\x\9\4\5\o\q\3\z\k\i\d\i\a\y\9\u\q\i\5\r\z\h\n\e\m\w\9\k\u\c\y\o\u\2\v\2\d\x\8\e\e\2\m\e\1\g\d\o\4\4\t\1\x\m\p\7\e\p\v\4\s\i\9\c\d\4\n\z\l\x\0\e\a\e\1\a\w\6\0\y\m\5\n\n\x\3\q\2\6\p\r\r\n\j\f\m\b\v\l\j\9\4\n\l\u\e\c\j\9\r\8\4\8\p\5\j\f\r\n\f\m\w\b\x\v\f\u\3\3\d\u\5\h\9\z\h\k\q\c\n\x\7\j\2\s\w\r\s\6\8\8\u\m\n\d\i\y\z\a\2\a\8\o\z\0\k\h\t\j\r\q\g\i\b\t\2\s\3\4\i\d\z\1\l\e\v\c\g\p\o\9\b\4\y\s\j\7\3\k\l\u\6\n\m\r\3\4\l\s\m\l\z\e\e\2\e\i\m\t\u\8\4\i\f\w\c\k\0\c\5\8\r\q\5\u\t\5\y\b\o\6\u\o\x\w\8\c\r\3\o\b\i\k\s\1\r\3\d\4\y\c\e\c\v\d\l\e\j\b\b\d\d\b\k\f\g\m\1\b\o\f\g\5\6\z\i\1\y\h\s\u\t\7 ]] 00:07:53.571 02:52:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.571 02:52:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:53.571 [2024-12-05 02:52:24.302566] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:53.571 [2024-12-05 02:52:24.302733] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61988 ] 00:07:53.829 [2024-12-05 02:52:24.467717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.830 [2024-12-05 02:52:24.560700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.088 [2024-12-05 02:52:24.724719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.088  [2024-12-05T02:52:25.869Z] Copying: 512/512 [B] (average 500 kBps) 00:07:55.025 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ wa9g15ouxakdfmrc7da2a85q91sfs8hj04zkxv7vvo1yzbguj1afkgkf32m3q99w2kvdksftgcy17qs8dottcexmq3l0y1ya11796dkzf4pffi013xn0tqa8uzb6sxdv5d86n5c6q9xh6nfktf2hhxs2xakdxs5bp379irsqaw40odkv8im2wtqy57ba3m1r1wvqac3cloqytq5i0w774o5mlh0x945oq3zkidiay9uqi5rzhnemw9kucyou2v2dx8ee2me1gdo44t1xmp7epv4si9cd4nzlx0eae1aw60ym5nnx3q26prrnjfmbvlj94nluecj9r848p5jfrnfmwbxvfu33du5h9zhkqcnx7j2swrs688umndiyza2a8oz0khtjrqgibt2s34idz1levcgpo9b4ysj73klu6nmr34lsmlzee2eimtu84ifwck0c58rq5ut5ybo6uoxw8cr3obiks1r3d4ycecvdlejbbddbkfgm1bofg56zi1yhsut7 == \w\a\9\g\1\5\o\u\x\a\k\d\f\m\r\c\7\d\a\2\a\8\5\q\9\1\s\f\s\8\h\j\0\4\z\k\x\v\7\v\v\o\1\y\z\b\g\u\j\1\a\f\k\g\k\f\3\2\m\3\q\9\9\w\2\k\v\d\k\s\f\t\g\c\y\1\7\q\s\8\d\o\t\t\c\e\x\m\q\3\l\0\y\1\y\a\1\1\7\9\6\d\k\z\f\4\p\f\f\i\0\1\3\x\n\0\t\q\a\8\u\z\b\6\s\x\d\v\5\d\8\6\n\5\c\6\q\9\x\h\6\n\f\k\t\f\2\h\h\x\s\2\x\a\k\d\x\s\5\b\p\3\7\9\i\r\s\q\a\w\4\0\o\d\k\v\8\i\m\2\w\t\q\y\5\7\b\a\3\m\1\r\1\w\v\q\a\c\3\c\l\o\q\y\t\q\5\i\0\w\7\7\4\o\5\m\l\h\0\x\9\4\5\o\q\3\z\k\i\d\i\a\y\9\u\q\i\5\r\z\h\n\e\m\w\9\k\u\c\y\o\u\2\v\2\d\x\8\e\e\2\m\e\1\g\d\o\4\4\t\1\x\m\p\7\e\p\v\4\s\i\9\c\d\4\n\z\l\x\0\e\a\e\1\a\w\6\0\y\m\5\n\n\x\3\q\2\6\p\r\r\n\j\f\m\b\v\l\j\9\4\n\l\u\e\c\j\9\r\8\4\8\p\5\j\f\r\n\f\m\w\b\x\v\f\u\3\3\d\u\5\h\9\z\h\k\q\c\n\x\7\j\2\s\w\r\s\6\8\8\u\m\n\d\i\y\z\a\2\a\8\o\z\0\k\h\t\j\r\q\g\i\b\t\2\s\3\4\i\d\z\1\l\e\v\c\g\p\o\9\b\4\y\s\j\7\3\k\l\u\6\n\m\r\3\4\l\s\m\l\z\e\e\2\e\i\m\t\u\8\4\i\f\w\c\k\0\c\5\8\r\q\5\u\t\5\y\b\o\6\u\o\x\w\8\c\r\3\o\b\i\k\s\1\r\3\d\4\y\c\e\c\v\d\l\e\j\b\b\d\d\b\k\f\g\m\1\b\o\f\g\5\6\z\i\1\y\h\s\u\t\7 ]] 00:07:55.025 00:07:55.025 real 0m12.101s 00:07:55.025 user 0m9.666s 00:07:55.025 sys 0m6.722s 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.025 ************************************ 00:07:55.025 END TEST dd_flags_misc 00:07:55.025 ************************************ 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:55.025 * Second test run, disabling liburing, forcing AIO 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:55.025 ************************************ 00:07:55.025 START TEST dd_flag_append_forced_aio 00:07:55.025 ************************************ 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=xlzhhycxvyg755deoe3sgszr61q51vx4 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=hnhf5ha9kvbe5g3m0qmw2draatlil0tk 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s xlzhhycxvyg755deoe3sgszr61q51vx4 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s hnhf5ha9kvbe5g3m0qmw2draatlil0tk 00:07:55.025 02:52:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:55.284 [2024-12-05 02:52:25.878750] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:55.285 [2024-12-05 02:52:25.878949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62027 ] 00:07:55.285 [2024-12-05 02:52:26.060132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.543 [2024-12-05 02:52:26.155817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.543 [2024-12-05 02:52:26.312213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.802  [2024-12-05T02:52:27.584Z] Copying: 32/32 [B] (average 31 kBps) 00:07:56.740 00:07:56.740 02:52:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ hnhf5ha9kvbe5g3m0qmw2draatlil0tkxlzhhycxvyg755deoe3sgszr61q51vx4 == \h\n\h\f\5\h\a\9\k\v\b\e\5\g\3\m\0\q\m\w\2\d\r\a\a\t\l\i\l\0\t\k\x\l\z\h\h\y\c\x\v\y\g\7\5\5\d\e\o\e\3\s\g\s\z\r\6\1\q\5\1\v\x\4 ]] 00:07:56.740 00:07:56.740 real 0m1.479s 00:07:56.740 user 0m1.162s 00:07:56.740 sys 0m0.195s 00:07:56.740 02:52:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.740 02:52:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:56.740 ************************************ 00:07:56.740 END TEST dd_flag_append_forced_aio 00:07:56.740 ************************************ 00:07:56.740 02:52:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:56.740 02:52:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.740 02:52:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.740 02:52:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:56.740 ************************************ 00:07:56.740 START TEST dd_flag_directory_forced_aio 00:07:56.740 ************************************ 00:07:56.740 02:52:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:07:56.740 02:52:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:56.740 02:52:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:56.741 02:52:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:56.741 02:52:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.741 02:52:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.741 02:52:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.741 02:52:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.741 02:52:27 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.741 02:52:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.741 02:52:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.741 02:52:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:56.741 02:52:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:56.741 [2024-12-05 02:52:27.412624] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:56.741 [2024-12-05 02:52:27.412846] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62067 ] 00:07:57.000 [2024-12-05 02:52:27.595076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.000 [2024-12-05 02:52:27.684050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.000 [2024-12-05 02:52:27.839901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.260 [2024-12-05 02:52:27.935134] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:57.260 [2024-12-05 02:52:27.935186] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:57.260 [2024-12-05 02:52:27.935209] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:57.829 [2024-12-05 02:52:28.530730] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:58.088 02:52:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:58.088 02:52:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:58.088 02:52:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:58.088 02:52:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:58.088 02:52:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:58.088 02:52:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:58.088 02:52:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:58.088 02:52:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:58.088 02:52:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:58.088 02:52:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.088 02:52:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.088 02:52:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.088 02:52:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.088 02:52:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.088 02:52:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.088 02:52:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.088 02:52:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.088 02:52:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:58.088 [2024-12-05 02:52:28.874898] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:58.088 [2024-12-05 02:52:28.875074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62083 ] 00:07:58.347 [2024-12-05 02:52:29.055188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.347 [2024-12-05 02:52:29.142219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.606 [2024-12-05 02:52:29.309740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.606 [2024-12-05 02:52:29.404248] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.606 [2024-12-05 02:52:29.404322] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.606 [2024-12-05 02:52:29.404346] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.553 [2024-12-05 02:52:30.053298] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:59.553 02:52:30 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.553 00:07:59.553 real 0m2.998s 00:07:59.553 user 0m2.370s 00:07:59.553 sys 0m0.407s 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:59.553 ************************************ 00:07:59.553 END TEST dd_flag_directory_forced_aio 00:07:59.553 ************************************ 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:59.553 ************************************ 00:07:59.553 START TEST dd_flag_nofollow_forced_aio 00:07:59.553 ************************************ 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.553 02:52:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.829 [2024-12-05 02:52:30.494559] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:59.829 [2024-12-05 02:52:30.494841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62129 ] 00:08:00.086 [2024-12-05 02:52:30.686279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.086 [2024-12-05 02:52:30.811680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.344 [2024-12-05 02:52:30.978303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.344 [2024-12-05 02:52:31.075846] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:00.344 [2024-12-05 02:52:31.075917] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:00.344 [2024-12-05 02:52:31.075941] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:00.911 [2024-12-05 02:52:31.699559] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:01.171 02:52:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:01.171 02:52:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:01.171 02:52:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:01.171 02:52:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:01.171 02:52:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:01.171 02:52:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:01.171 02:52:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:01.171 02:52:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:01.171 02:52:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:01.171 02:52:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.171 02:52:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.171 02:52:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.171 02:52:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.171 02:52:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.171 02:52:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.171 02:52:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.171 02:52:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:01.171 02:52:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:01.430 [2024-12-05 02:52:32.044059] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:08:01.430 [2024-12-05 02:52:32.044225] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62145 ] 00:08:01.430 [2024-12-05 02:52:32.226886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.691 [2024-12-05 02:52:32.323603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.691 [2024-12-05 02:52:32.487465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.950 [2024-12-05 02:52:32.578352] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:01.950 [2024-12-05 02:52:32.578417] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:01.950 [2024-12-05 02:52:32.578457] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:02.518 [2024-12-05 02:52:33.172031] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:02.777 02:52:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:02.777 02:52:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:02.777 02:52:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:02.777 02:52:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:02.777 02:52:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:02.777 02:52:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:02.777 02:52:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:08:02.777 02:52:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:02.777 02:52:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:02.777 02:52:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.777 [2024-12-05 02:52:33.533668] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:08:02.777 [2024-12-05 02:52:33.533870] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62170 ] 00:08:03.036 [2024-12-05 02:52:33.714004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.036 [2024-12-05 02:52:33.795996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.295 [2024-12-05 02:52:33.945301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.295  [2024-12-05T02:52:35.076Z] Copying: 512/512 [B] (average 500 kBps) 00:08:04.232 00:08:04.232 02:52:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ i8dvovopkvfv8r101evkit2iu1czqrkc6svgj0svaqqhfb1p0wnlfttvuujycpbvkgjigeb1iqo2213zdqqid56z453l9tkrby9a44d588q448rw8fnqysy7ks16iaaq92jbnn51uoeyuh8wbo0sg4nfmi5irf45ib9oam6fhhlrh1erfq32eemeykc93gakhbpwcqtzyn6sfiq8qsco7jl170et9hb6vau70ogjgqhgo86e3xn4b237swc3b5cln3xtbvu91w4t7f3pipj6wuusgoa04awb1tq3oqmebv87k49cpp7o4pqk3xub4ft0oocad9eitrjov09tu65nzulpqnqcv8bwk5hi0x586y04fmykrwsfpz4yc9tyiprv4fes6ostnbhkpdz3uljn4dg9o60k57niglson9h5d8sb3qujuljvfbqp3cfdv6lcm16qh7lbee1gth74z4zgohyvo5daig0lrbln0uy92jrlwypb2wff5zkgsfywb4ws == \i\8\d\v\o\v\o\p\k\v\f\v\8\r\1\0\1\e\v\k\i\t\2\i\u\1\c\z\q\r\k\c\6\s\v\g\j\0\s\v\a\q\q\h\f\b\1\p\0\w\n\l\f\t\t\v\u\u\j\y\c\p\b\v\k\g\j\i\g\e\b\1\i\q\o\2\2\1\3\z\d\q\q\i\d\5\6\z\4\5\3\l\9\t\k\r\b\y\9\a\4\4\d\5\8\8\q\4\4\8\r\w\8\f\n\q\y\s\y\7\k\s\1\6\i\a\a\q\9\2\j\b\n\n\5\1\u\o\e\y\u\h\8\w\b\o\0\s\g\4\n\f\m\i\5\i\r\f\4\5\i\b\9\o\a\m\6\f\h\h\l\r\h\1\e\r\f\q\3\2\e\e\m\e\y\k\c\9\3\g\a\k\h\b\p\w\c\q\t\z\y\n\6\s\f\i\q\8\q\s\c\o\7\j\l\1\7\0\e\t\9\h\b\6\v\a\u\7\0\o\g\j\g\q\h\g\o\8\6\e\3\x\n\4\b\2\3\7\s\w\c\3\b\5\c\l\n\3\x\t\b\v\u\9\1\w\4\t\7\f\3\p\i\p\j\6\w\u\u\s\g\o\a\0\4\a\w\b\1\t\q\3\o\q\m\e\b\v\8\7\k\4\9\c\p\p\7\o\4\p\q\k\3\x\u\b\4\f\t\0\o\o\c\a\d\9\e\i\t\r\j\o\v\0\9\t\u\6\5\n\z\u\l\p\q\n\q\c\v\8\b\w\k\5\h\i\0\x\5\8\6\y\0\4\f\m\y\k\r\w\s\f\p\z\4\y\c\9\t\y\i\p\r\v\4\f\e\s\6\o\s\t\n\b\h\k\p\d\z\3\u\l\j\n\4\d\g\9\o\6\0\k\5\7\n\i\g\l\s\o\n\9\h\5\d\8\s\b\3\q\u\j\u\l\j\v\f\b\q\p\3\c\f\d\v\6\l\c\m\1\6\q\h\7\l\b\e\e\1\g\t\h\7\4\z\4\z\g\o\h\y\v\o\5\d\a\i\g\0\l\r\b\l\n\0\u\y\9\2\j\r\l\w\y\p\b\2\w\f\f\5\z\k\g\s\f\y\w\b\4\w\s ]] 00:08:04.232 00:08:04.232 real 0m4.533s 00:08:04.232 user 0m3.598s 00:08:04.232 sys 0m0.589s 00:08:04.232 02:52:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.232 02:52:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:04.232 ************************************ 00:08:04.232 END TEST dd_flag_nofollow_forced_aio 00:08:04.232 ************************************ 00:08:04.232 02:52:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:08:04.232 02:52:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.232 02:52:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.232 02:52:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:04.232 ************************************ 00:08:04.232 START TEST dd_flag_noatime_forced_aio 00:08:04.233 ************************************ 00:08:04.233 02:52:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:08:04.233 02:52:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:04.233 02:52:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:04.233 02:52:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:04.233 02:52:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:04.233 02:52:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:04.233 02:52:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:04.233 02:52:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733367154 00:08:04.233 02:52:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:04.233 02:52:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733367154 00:08:04.233 02:52:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:05.167 02:52:35 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:05.426 [2024-12-05 02:52:36.084546] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:08:05.426 [2024-12-05 02:52:36.084724] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62217 ] 00:08:05.426 [2024-12-05 02:52:36.261927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.685 [2024-12-05 02:52:36.351691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.685 [2024-12-05 02:52:36.508720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.944  [2024-12-05T02:52:37.726Z] Copying: 512/512 [B] (average 500 kBps) 00:08:06.882 00:08:06.882 02:52:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:06.882 02:52:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733367154 )) 00:08:06.882 02:52:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.882 02:52:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733367154 )) 00:08:06.882 02:52:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.882 [2024-12-05 02:52:37.590589] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:08:06.882 [2024-12-05 02:52:37.590798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62246 ] 00:08:07.140 [2024-12-05 02:52:37.771777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.140 [2024-12-05 02:52:37.864024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.399 [2024-12-05 02:52:38.010506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.399  [2024-12-05T02:52:39.180Z] Copying: 512/512 [B] (average 500 kBps) 00:08:08.336 00:08:08.336 02:52:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:08.336 02:52:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733367158 )) 00:08:08.336 00:08:08.336 real 0m4.006s 00:08:08.336 user 0m2.344s 00:08:08.336 sys 0m0.414s 00:08:08.336 02:52:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.336 02:52:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:08.336 ************************************ 00:08:08.336 END TEST dd_flag_noatime_forced_aio 00:08:08.336 ************************************ 00:08:08.336 02:52:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:08.336 02:52:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:08.336 02:52:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.336 02:52:38 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:08.336 ************************************ 00:08:08.336 START TEST dd_flags_misc_forced_aio 00:08:08.336 ************************************ 00:08:08.336 02:52:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:08:08.336 02:52:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:08.336 02:52:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:08.336 02:52:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:08.336 02:52:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:08.336 02:52:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:08.336 02:52:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:08.336 02:52:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:08.336 02:52:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:08.336 02:52:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:08.336 [2024-12-05 02:52:39.123829] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:08:08.336 [2024-12-05 02:52:39.124025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62279 ] 00:08:08.596 [2024-12-05 02:52:39.307935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.596 [2024-12-05 02:52:39.426774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.855 [2024-12-05 02:52:39.616391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.114  [2024-12-05T02:52:40.894Z] Copying: 512/512 [B] (average 500 kBps) 00:08:10.050 00:08:10.050 02:52:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ beqbpk5oacge1fdue3eqs7476quo1h3030dorw9hqaixpoygc8kh16qrp0dlginhax4wcuaxx7jg7p9qttn30yt2u9dzqu12wqx4u15q3j19yru0c8xgl0g6nxl2zecbuwajjx7yb1hmll13kflabbdhix4efr5hgcat8vsx9upplhe5ctqegfobiyctlj2be0dokumawt53vtjwyer0h3f7h3tgwcjpfadfyo0x2jagid54hsf17g7r10q8fc3ya37xhrzwdzehrmmfvw80jyo1v4tpe7dcpcin4p0hxqpchi76r0fzz99z2wirh1fqb5ihu6rhssyoovz8o839z6hbbp67g1vgm2rbubfcfo851awn89idy4ebatjua980v6ewo34tep7hco53dje2texw6kdorci4kdttky8uulzkvprfbwzyc6y95hse27hcsjryofg3470k6cla0bvqelzzccdai494m4f8ofb5oujai1vgpju5ei0vr98lgn8h == 
\b\e\q\b\p\k\5\o\a\c\g\e\1\f\d\u\e\3\e\q\s\7\4\7\6\q\u\o\1\h\3\0\3\0\d\o\r\w\9\h\q\a\i\x\p\o\y\g\c\8\k\h\1\6\q\r\p\0\d\l\g\i\n\h\a\x\4\w\c\u\a\x\x\7\j\g\7\p\9\q\t\t\n\3\0\y\t\2\u\9\d\z\q\u\1\2\w\q\x\4\u\1\5\q\3\j\1\9\y\r\u\0\c\8\x\g\l\0\g\6\n\x\l\2\z\e\c\b\u\w\a\j\j\x\7\y\b\1\h\m\l\l\1\3\k\f\l\a\b\b\d\h\i\x\4\e\f\r\5\h\g\c\a\t\8\v\s\x\9\u\p\p\l\h\e\5\c\t\q\e\g\f\o\b\i\y\c\t\l\j\2\b\e\0\d\o\k\u\m\a\w\t\5\3\v\t\j\w\y\e\r\0\h\3\f\7\h\3\t\g\w\c\j\p\f\a\d\f\y\o\0\x\2\j\a\g\i\d\5\4\h\s\f\1\7\g\7\r\1\0\q\8\f\c\3\y\a\3\7\x\h\r\z\w\d\z\e\h\r\m\m\f\v\w\8\0\j\y\o\1\v\4\t\p\e\7\d\c\p\c\i\n\4\p\0\h\x\q\p\c\h\i\7\6\r\0\f\z\z\9\9\z\2\w\i\r\h\1\f\q\b\5\i\h\u\6\r\h\s\s\y\o\o\v\z\8\o\8\3\9\z\6\h\b\b\p\6\7\g\1\v\g\m\2\r\b\u\b\f\c\f\o\8\5\1\a\w\n\8\9\i\d\y\4\e\b\a\t\j\u\a\9\8\0\v\6\e\w\o\3\4\t\e\p\7\h\c\o\5\3\d\j\e\2\t\e\x\w\6\k\d\o\r\c\i\4\k\d\t\t\k\y\8\u\u\l\z\k\v\p\r\f\b\w\z\y\c\6\y\9\5\h\s\e\2\7\h\c\s\j\r\y\o\f\g\3\4\7\0\k\6\c\l\a\0\b\v\q\e\l\z\z\c\c\d\a\i\4\9\4\m\4\f\8\o\f\b\5\o\u\j\a\i\1\v\g\p\j\u\5\e\i\0\v\r\9\8\l\g\n\8\h ]] 00:08:10.050 02:52:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:10.050 02:52:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:10.050 [2024-12-05 02:52:40.706668] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:08:10.050 [2024-12-05 02:52:40.706897] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62299 ] 00:08:10.050 [2024-12-05 02:52:40.883127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.308 [2024-12-05 02:52:40.980300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.308 [2024-12-05 02:52:41.130989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.567  [2024-12-05T02:52:42.353Z] Copying: 512/512 [B] (average 500 kBps) 00:08:11.509 00:08:11.509 02:52:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ beqbpk5oacge1fdue3eqs7476quo1h3030dorw9hqaixpoygc8kh16qrp0dlginhax4wcuaxx7jg7p9qttn30yt2u9dzqu12wqx4u15q3j19yru0c8xgl0g6nxl2zecbuwajjx7yb1hmll13kflabbdhix4efr5hgcat8vsx9upplhe5ctqegfobiyctlj2be0dokumawt53vtjwyer0h3f7h3tgwcjpfadfyo0x2jagid54hsf17g7r10q8fc3ya37xhrzwdzehrmmfvw80jyo1v4tpe7dcpcin4p0hxqpchi76r0fzz99z2wirh1fqb5ihu6rhssyoovz8o839z6hbbp67g1vgm2rbubfcfo851awn89idy4ebatjua980v6ewo34tep7hco53dje2texw6kdorci4kdttky8uulzkvprfbwzyc6y95hse27hcsjryofg3470k6cla0bvqelzzccdai494m4f8ofb5oujai1vgpju5ei0vr98lgn8h == 
\b\e\q\b\p\k\5\o\a\c\g\e\1\f\d\u\e\3\e\q\s\7\4\7\6\q\u\o\1\h\3\0\3\0\d\o\r\w\9\h\q\a\i\x\p\o\y\g\c\8\k\h\1\6\q\r\p\0\d\l\g\i\n\h\a\x\4\w\c\u\a\x\x\7\j\g\7\p\9\q\t\t\n\3\0\y\t\2\u\9\d\z\q\u\1\2\w\q\x\4\u\1\5\q\3\j\1\9\y\r\u\0\c\8\x\g\l\0\g\6\n\x\l\2\z\e\c\b\u\w\a\j\j\x\7\y\b\1\h\m\l\l\1\3\k\f\l\a\b\b\d\h\i\x\4\e\f\r\5\h\g\c\a\t\8\v\s\x\9\u\p\p\l\h\e\5\c\t\q\e\g\f\o\b\i\y\c\t\l\j\2\b\e\0\d\o\k\u\m\a\w\t\5\3\v\t\j\w\y\e\r\0\h\3\f\7\h\3\t\g\w\c\j\p\f\a\d\f\y\o\0\x\2\j\a\g\i\d\5\4\h\s\f\1\7\g\7\r\1\0\q\8\f\c\3\y\a\3\7\x\h\r\z\w\d\z\e\h\r\m\m\f\v\w\8\0\j\y\o\1\v\4\t\p\e\7\d\c\p\c\i\n\4\p\0\h\x\q\p\c\h\i\7\6\r\0\f\z\z\9\9\z\2\w\i\r\h\1\f\q\b\5\i\h\u\6\r\h\s\s\y\o\o\v\z\8\o\8\3\9\z\6\h\b\b\p\6\7\g\1\v\g\m\2\r\b\u\b\f\c\f\o\8\5\1\a\w\n\8\9\i\d\y\4\e\b\a\t\j\u\a\9\8\0\v\6\e\w\o\3\4\t\e\p\7\h\c\o\5\3\d\j\e\2\t\e\x\w\6\k\d\o\r\c\i\4\k\d\t\t\k\y\8\u\u\l\z\k\v\p\r\f\b\w\z\y\c\6\y\9\5\h\s\e\2\7\h\c\s\j\r\y\o\f\g\3\4\7\0\k\6\c\l\a\0\b\v\q\e\l\z\z\c\c\d\a\i\4\9\4\m\4\f\8\o\f\b\5\o\u\j\a\i\1\v\g\p\j\u\5\e\i\0\v\r\9\8\l\g\n\8\h ]] 00:08:11.509 02:52:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:11.509 02:52:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:11.509 [2024-12-05 02:52:42.209029] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:08:11.509 [2024-12-05 02:52:42.209222] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62318 ] 00:08:11.769 [2024-12-05 02:52:42.386913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.769 [2024-12-05 02:52:42.470606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.028 [2024-12-05 02:52:42.624338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.028  [2024-12-05T02:52:43.814Z] Copying: 512/512 [B] (average 500 kBps) 00:08:12.970 00:08:12.970 02:52:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ beqbpk5oacge1fdue3eqs7476quo1h3030dorw9hqaixpoygc8kh16qrp0dlginhax4wcuaxx7jg7p9qttn30yt2u9dzqu12wqx4u15q3j19yru0c8xgl0g6nxl2zecbuwajjx7yb1hmll13kflabbdhix4efr5hgcat8vsx9upplhe5ctqegfobiyctlj2be0dokumawt53vtjwyer0h3f7h3tgwcjpfadfyo0x2jagid54hsf17g7r10q8fc3ya37xhrzwdzehrmmfvw80jyo1v4tpe7dcpcin4p0hxqpchi76r0fzz99z2wirh1fqb5ihu6rhssyoovz8o839z6hbbp67g1vgm2rbubfcfo851awn89idy4ebatjua980v6ewo34tep7hco53dje2texw6kdorci4kdttky8uulzkvprfbwzyc6y95hse27hcsjryofg3470k6cla0bvqelzzccdai494m4f8ofb5oujai1vgpju5ei0vr98lgn8h == 
\b\e\q\b\p\k\5\o\a\c\g\e\1\f\d\u\e\3\e\q\s\7\4\7\6\q\u\o\1\h\3\0\3\0\d\o\r\w\9\h\q\a\i\x\p\o\y\g\c\8\k\h\1\6\q\r\p\0\d\l\g\i\n\h\a\x\4\w\c\u\a\x\x\7\j\g\7\p\9\q\t\t\n\3\0\y\t\2\u\9\d\z\q\u\1\2\w\q\x\4\u\1\5\q\3\j\1\9\y\r\u\0\c\8\x\g\l\0\g\6\n\x\l\2\z\e\c\b\u\w\a\j\j\x\7\y\b\1\h\m\l\l\1\3\k\f\l\a\b\b\d\h\i\x\4\e\f\r\5\h\g\c\a\t\8\v\s\x\9\u\p\p\l\h\e\5\c\t\q\e\g\f\o\b\i\y\c\t\l\j\2\b\e\0\d\o\k\u\m\a\w\t\5\3\v\t\j\w\y\e\r\0\h\3\f\7\h\3\t\g\w\c\j\p\f\a\d\f\y\o\0\x\2\j\a\g\i\d\5\4\h\s\f\1\7\g\7\r\1\0\q\8\f\c\3\y\a\3\7\x\h\r\z\w\d\z\e\h\r\m\m\f\v\w\8\0\j\y\o\1\v\4\t\p\e\7\d\c\p\c\i\n\4\p\0\h\x\q\p\c\h\i\7\6\r\0\f\z\z\9\9\z\2\w\i\r\h\1\f\q\b\5\i\h\u\6\r\h\s\s\y\o\o\v\z\8\o\8\3\9\z\6\h\b\b\p\6\7\g\1\v\g\m\2\r\b\u\b\f\c\f\o\8\5\1\a\w\n\8\9\i\d\y\4\e\b\a\t\j\u\a\9\8\0\v\6\e\w\o\3\4\t\e\p\7\h\c\o\5\3\d\j\e\2\t\e\x\w\6\k\d\o\r\c\i\4\k\d\t\t\k\y\8\u\u\l\z\k\v\p\r\f\b\w\z\y\c\6\y\9\5\h\s\e\2\7\h\c\s\j\r\y\o\f\g\3\4\7\0\k\6\c\l\a\0\b\v\q\e\l\z\z\c\c\d\a\i\4\9\4\m\4\f\8\o\f\b\5\o\u\j\a\i\1\v\g\p\j\u\5\e\i\0\v\r\9\8\l\g\n\8\h ]] 00:08:12.970 02:52:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:12.970 02:52:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:12.970 [2024-12-05 02:52:43.696448] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:08:12.970 [2024-12-05 02:52:43.696609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62338 ] 00:08:13.230 [2024-12-05 02:52:43.863593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.230 [2024-12-05 02:52:43.957277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.489 [2024-12-05 02:52:44.116141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.489  [2024-12-05T02:52:45.271Z] Copying: 512/512 [B] (average 250 kBps) 00:08:14.427 00:08:14.427 02:52:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ beqbpk5oacge1fdue3eqs7476quo1h3030dorw9hqaixpoygc8kh16qrp0dlginhax4wcuaxx7jg7p9qttn30yt2u9dzqu12wqx4u15q3j19yru0c8xgl0g6nxl2zecbuwajjx7yb1hmll13kflabbdhix4efr5hgcat8vsx9upplhe5ctqegfobiyctlj2be0dokumawt53vtjwyer0h3f7h3tgwcjpfadfyo0x2jagid54hsf17g7r10q8fc3ya37xhrzwdzehrmmfvw80jyo1v4tpe7dcpcin4p0hxqpchi76r0fzz99z2wirh1fqb5ihu6rhssyoovz8o839z6hbbp67g1vgm2rbubfcfo851awn89idy4ebatjua980v6ewo34tep7hco53dje2texw6kdorci4kdttky8uulzkvprfbwzyc6y95hse27hcsjryofg3470k6cla0bvqelzzccdai494m4f8ofb5oujai1vgpju5ei0vr98lgn8h == 
\b\e\q\b\p\k\5\o\a\c\g\e\1\f\d\u\e\3\e\q\s\7\4\7\6\q\u\o\1\h\3\0\3\0\d\o\r\w\9\h\q\a\i\x\p\o\y\g\c\8\k\h\1\6\q\r\p\0\d\l\g\i\n\h\a\x\4\w\c\u\a\x\x\7\j\g\7\p\9\q\t\t\n\3\0\y\t\2\u\9\d\z\q\u\1\2\w\q\x\4\u\1\5\q\3\j\1\9\y\r\u\0\c\8\x\g\l\0\g\6\n\x\l\2\z\e\c\b\u\w\a\j\j\x\7\y\b\1\h\m\l\l\1\3\k\f\l\a\b\b\d\h\i\x\4\e\f\r\5\h\g\c\a\t\8\v\s\x\9\u\p\p\l\h\e\5\c\t\q\e\g\f\o\b\i\y\c\t\l\j\2\b\e\0\d\o\k\u\m\a\w\t\5\3\v\t\j\w\y\e\r\0\h\3\f\7\h\3\t\g\w\c\j\p\f\a\d\f\y\o\0\x\2\j\a\g\i\d\5\4\h\s\f\1\7\g\7\r\1\0\q\8\f\c\3\y\a\3\7\x\h\r\z\w\d\z\e\h\r\m\m\f\v\w\8\0\j\y\o\1\v\4\t\p\e\7\d\c\p\c\i\n\4\p\0\h\x\q\p\c\h\i\7\6\r\0\f\z\z\9\9\z\2\w\i\r\h\1\f\q\b\5\i\h\u\6\r\h\s\s\y\o\o\v\z\8\o\8\3\9\z\6\h\b\b\p\6\7\g\1\v\g\m\2\r\b\u\b\f\c\f\o\8\5\1\a\w\n\8\9\i\d\y\4\e\b\a\t\j\u\a\9\8\0\v\6\e\w\o\3\4\t\e\p\7\h\c\o\5\3\d\j\e\2\t\e\x\w\6\k\d\o\r\c\i\4\k\d\t\t\k\y\8\u\u\l\z\k\v\p\r\f\b\w\z\y\c\6\y\9\5\h\s\e\2\7\h\c\s\j\r\y\o\f\g\3\4\7\0\k\6\c\l\a\0\b\v\q\e\l\z\z\c\c\d\a\i\4\9\4\m\4\f\8\o\f\b\5\o\u\j\a\i\1\v\g\p\j\u\5\e\i\0\v\r\9\8\l\g\n\8\h ]] 00:08:14.427 02:52:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:14.427 02:52:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:14.427 02:52:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:14.427 02:52:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:14.427 02:52:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:14.427 02:52:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:14.427 [2024-12-05 02:52:45.172161] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:08:14.427 [2024-12-05 02:52:45.172360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62357 ] 00:08:14.685 [2024-12-05 02:52:45.357406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.685 [2024-12-05 02:52:45.438933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.944 [2024-12-05 02:52:45.593552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.944  [2024-12-05T02:52:46.725Z] Copying: 512/512 [B] (average 500 kBps) 00:08:15.881 00:08:15.881 02:52:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ acnplr7vvui0t9z4vgpu17p89vytsdwir1d17mae07sccrgpf1mvqj02bho9danbatot02tpm4mb5wh96fvstp0wdx5d095w8r7uc986h3lm1lljtqo7usjg7qs9y7nwt40y0cfk8xla5q0o1plsymcple1eu1d5dz45yci71u36833s1npowcq4coye3datqjtp64ni5i2b3791nxz5gjpx8ngv1frlitmrsu6q30p8pcvpjjdize2ouk6oowvdarq488i6x1z7sz8n8s1wbnt803t95ny1v5hz7u32sfhfgzjd82oatbkma5xlcumwspans2ddb8i4sgiyfkcp2dl9uey07bd87zddb9xdn9byx4e6m68l9mnix4sub5326g4rv3jc1ir15kfjsqpg0l3xtqe0m3v2tvm85gkyg885e6q1ai5cb7b22otp8yae30m3wherpk9dyasduf7towuvcaw2vnwo465hxwe3ldgjuxf3myykwce0svchdpah == \a\c\n\p\l\r\7\v\v\u\i\0\t\9\z\4\v\g\p\u\1\7\p\8\9\v\y\t\s\d\w\i\r\1\d\1\7\m\a\e\0\7\s\c\c\r\g\p\f\1\m\v\q\j\0\2\b\h\o\9\d\a\n\b\a\t\o\t\0\2\t\p\m\4\m\b\5\w\h\9\6\f\v\s\t\p\0\w\d\x\5\d\0\9\5\w\8\r\7\u\c\9\8\6\h\3\l\m\1\l\l\j\t\q\o\7\u\s\j\g\7\q\s\9\y\7\n\w\t\4\0\y\0\c\f\k\8\x\l\a\5\q\0\o\1\p\l\s\y\m\c\p\l\e\1\e\u\1\d\5\d\z\4\5\y\c\i\7\1\u\3\6\8\3\3\s\1\n\p\o\w\c\q\4\c\o\y\e\3\d\a\t\q\j\t\p\6\4\n\i\5\i\2\b\3\7\9\1\n\x\z\5\g\j\p\x\8\n\g\v\1\f\r\l\i\t\m\r\s\u\6\q\3\0\p\8\p\c\v\p\j\j\d\i\z\e\2\o\u\k\6\o\o\w\v\d\a\r\q\4\8\8\i\6\x\1\z\7\s\z\8\n\8\s\1\w\b\n\t\8\0\3\t\9\5\n\y\1\v\5\h\z\7\u\3\2\s\f\h\f\g\z\j\d\8\2\o\a\t\b\k\m\a\5\x\l\c\u\m\w\s\p\a\n\s\2\d\d\b\8\i\4\s\g\i\y\f\k\c\p\2\d\l\9\u\e\y\0\7\b\d\8\7\z\d\d\b\9\x\d\n\9\b\y\x\4\e\6\m\6\8\l\9\m\n\i\x\4\s\u\b\5\3\2\6\g\4\r\v\3\j\c\1\i\r\1\5\k\f\j\s\q\p\g\0\l\3\x\t\q\e\0\m\3\v\2\t\v\m\8\5\g\k\y\g\8\8\5\e\6\q\1\a\i\5\c\b\7\b\2\2\o\t\p\8\y\a\e\3\0\m\3\w\h\e\r\p\k\9\d\y\a\s\d\u\f\7\t\o\w\u\v\c\a\w\2\v\n\w\o\4\6\5\h\x\w\e\3\l\d\g\j\u\x\f\3\m\y\y\k\w\c\e\0\s\v\c\h\d\p\a\h ]] 00:08:15.881 02:52:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:15.881 02:52:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:15.881 [2024-12-05 02:52:46.669674] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:08:15.881 [2024-12-05 02:52:46.669851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62376 ] 00:08:16.140 [2024-12-05 02:52:46.833932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.140 [2024-12-05 02:52:46.923814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.399 [2024-12-05 02:52:47.093330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.399  [2024-12-05T02:52:48.182Z] Copying: 512/512 [B] (average 500 kBps) 00:08:17.338 00:08:17.338 02:52:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ acnplr7vvui0t9z4vgpu17p89vytsdwir1d17mae07sccrgpf1mvqj02bho9danbatot02tpm4mb5wh96fvstp0wdx5d095w8r7uc986h3lm1lljtqo7usjg7qs9y7nwt40y0cfk8xla5q0o1plsymcple1eu1d5dz45yci71u36833s1npowcq4coye3datqjtp64ni5i2b3791nxz5gjpx8ngv1frlitmrsu6q30p8pcvpjjdize2ouk6oowvdarq488i6x1z7sz8n8s1wbnt803t95ny1v5hz7u32sfhfgzjd82oatbkma5xlcumwspans2ddb8i4sgiyfkcp2dl9uey07bd87zddb9xdn9byx4e6m68l9mnix4sub5326g4rv3jc1ir15kfjsqpg0l3xtqe0m3v2tvm85gkyg885e6q1ai5cb7b22otp8yae30m3wherpk9dyasduf7towuvcaw2vnwo465hxwe3ldgjuxf3myykwce0svchdpah == \a\c\n\p\l\r\7\v\v\u\i\0\t\9\z\4\v\g\p\u\1\7\p\8\9\v\y\t\s\d\w\i\r\1\d\1\7\m\a\e\0\7\s\c\c\r\g\p\f\1\m\v\q\j\0\2\b\h\o\9\d\a\n\b\a\t\o\t\0\2\t\p\m\4\m\b\5\w\h\9\6\f\v\s\t\p\0\w\d\x\5\d\0\9\5\w\8\r\7\u\c\9\8\6\h\3\l\m\1\l\l\j\t\q\o\7\u\s\j\g\7\q\s\9\y\7\n\w\t\4\0\y\0\c\f\k\8\x\l\a\5\q\0\o\1\p\l\s\y\m\c\p\l\e\1\e\u\1\d\5\d\z\4\5\y\c\i\7\1\u\3\6\8\3\3\s\1\n\p\o\w\c\q\4\c\o\y\e\3\d\a\t\q\j\t\p\6\4\n\i\5\i\2\b\3\7\9\1\n\x\z\5\g\j\p\x\8\n\g\v\1\f\r\l\i\t\m\r\s\u\6\q\3\0\p\8\p\c\v\p\j\j\d\i\z\e\2\o\u\k\6\o\o\w\v\d\a\r\q\4\8\8\i\6\x\1\z\7\s\z\8\n\8\s\1\w\b\n\t\8\0\3\t\9\5\n\y\1\v\5\h\z\7\u\3\2\s\f\h\f\g\z\j\d\8\2\o\a\t\b\k\m\a\5\x\l\c\u\m\w\s\p\a\n\s\2\d\d\b\8\i\4\s\g\i\y\f\k\c\p\2\d\l\9\u\e\y\0\7\b\d\8\7\z\d\d\b\9\x\d\n\9\b\y\x\4\e\6\m\6\8\l\9\m\n\i\x\4\s\u\b\5\3\2\6\g\4\r\v\3\j\c\1\i\r\1\5\k\f\j\s\q\p\g\0\l\3\x\t\q\e\0\m\3\v\2\t\v\m\8\5\g\k\y\g\8\8\5\e\6\q\1\a\i\5\c\b\7\b\2\2\o\t\p\8\y\a\e\3\0\m\3\w\h\e\r\p\k\9\d\y\a\s\d\u\f\7\t\o\w\u\v\c\a\w\2\v\n\w\o\4\6\5\h\x\w\e\3\l\d\g\j\u\x\f\3\m\y\y\k\w\c\e\0\s\v\c\h\d\p\a\h ]] 00:08:17.338 02:52:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:17.338 02:52:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:17.338 [2024-12-05 02:52:48.155831] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:08:17.338 [2024-12-05 02:52:48.155997] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62391 ] 00:08:17.601 [2024-12-05 02:52:48.329170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.601 [2024-12-05 02:52:48.414960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.860 [2024-12-05 02:52:48.563465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.860  [2024-12-05T02:52:49.642Z] Copying: 512/512 [B] (average 125 kBps) 00:08:18.798 00:08:18.798 02:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ acnplr7vvui0t9z4vgpu17p89vytsdwir1d17mae07sccrgpf1mvqj02bho9danbatot02tpm4mb5wh96fvstp0wdx5d095w8r7uc986h3lm1lljtqo7usjg7qs9y7nwt40y0cfk8xla5q0o1plsymcple1eu1d5dz45yci71u36833s1npowcq4coye3datqjtp64ni5i2b3791nxz5gjpx8ngv1frlitmrsu6q30p8pcvpjjdize2ouk6oowvdarq488i6x1z7sz8n8s1wbnt803t95ny1v5hz7u32sfhfgzjd82oatbkma5xlcumwspans2ddb8i4sgiyfkcp2dl9uey07bd87zddb9xdn9byx4e6m68l9mnix4sub5326g4rv3jc1ir15kfjsqpg0l3xtqe0m3v2tvm85gkyg885e6q1ai5cb7b22otp8yae30m3wherpk9dyasduf7towuvcaw2vnwo465hxwe3ldgjuxf3myykwce0svchdpah == \a\c\n\p\l\r\7\v\v\u\i\0\t\9\z\4\v\g\p\u\1\7\p\8\9\v\y\t\s\d\w\i\r\1\d\1\7\m\a\e\0\7\s\c\c\r\g\p\f\1\m\v\q\j\0\2\b\h\o\9\d\a\n\b\a\t\o\t\0\2\t\p\m\4\m\b\5\w\h\9\6\f\v\s\t\p\0\w\d\x\5\d\0\9\5\w\8\r\7\u\c\9\8\6\h\3\l\m\1\l\l\j\t\q\o\7\u\s\j\g\7\q\s\9\y\7\n\w\t\4\0\y\0\c\f\k\8\x\l\a\5\q\0\o\1\p\l\s\y\m\c\p\l\e\1\e\u\1\d\5\d\z\4\5\y\c\i\7\1\u\3\6\8\3\3\s\1\n\p\o\w\c\q\4\c\o\y\e\3\d\a\t\q\j\t\p\6\4\n\i\5\i\2\b\3\7\9\1\n\x\z\5\g\j\p\x\8\n\g\v\1\f\r\l\i\t\m\r\s\u\6\q\3\0\p\8\p\c\v\p\j\j\d\i\z\e\2\o\u\k\6\o\o\w\v\d\a\r\q\4\8\8\i\6\x\1\z\7\s\z\8\n\8\s\1\w\b\n\t\8\0\3\t\9\5\n\y\1\v\5\h\z\7\u\3\2\s\f\h\f\g\z\j\d\8\2\o\a\t\b\k\m\a\5\x\l\c\u\m\w\s\p\a\n\s\2\d\d\b\8\i\4\s\g\i\y\f\k\c\p\2\d\l\9\u\e\y\0\7\b\d\8\7\z\d\d\b\9\x\d\n\9\b\y\x\4\e\6\m\6\8\l\9\m\n\i\x\4\s\u\b\5\3\2\6\g\4\r\v\3\j\c\1\i\r\1\5\k\f\j\s\q\p\g\0\l\3\x\t\q\e\0\m\3\v\2\t\v\m\8\5\g\k\y\g\8\8\5\e\6\q\1\a\i\5\c\b\7\b\2\2\o\t\p\8\y\a\e\3\0\m\3\w\h\e\r\p\k\9\d\y\a\s\d\u\f\7\t\o\w\u\v\c\a\w\2\v\n\w\o\4\6\5\h\x\w\e\3\l\d\g\j\u\x\f\3\m\y\y\k\w\c\e\0\s\v\c\h\d\p\a\h ]] 00:08:18.798 02:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:18.798 02:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:18.798 [2024-12-05 02:52:49.621179] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:08:18.798 [2024-12-05 02:52:49.621356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62410 ] 00:08:19.057 [2024-12-05 02:52:49.797840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.057 [2024-12-05 02:52:49.880542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.316 [2024-12-05 02:52:50.032335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.316  [2024-12-05T02:52:51.097Z] Copying: 512/512 [B] (average 250 kBps) 00:08:20.253 00:08:20.253 02:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ acnplr7vvui0t9z4vgpu17p89vytsdwir1d17mae07sccrgpf1mvqj02bho9danbatot02tpm4mb5wh96fvstp0wdx5d095w8r7uc986h3lm1lljtqo7usjg7qs9y7nwt40y0cfk8xla5q0o1plsymcple1eu1d5dz45yci71u36833s1npowcq4coye3datqjtp64ni5i2b3791nxz5gjpx8ngv1frlitmrsu6q30p8pcvpjjdize2ouk6oowvdarq488i6x1z7sz8n8s1wbnt803t95ny1v5hz7u32sfhfgzjd82oatbkma5xlcumwspans2ddb8i4sgiyfkcp2dl9uey07bd87zddb9xdn9byx4e6m68l9mnix4sub5326g4rv3jc1ir15kfjsqpg0l3xtqe0m3v2tvm85gkyg885e6q1ai5cb7b22otp8yae30m3wherpk9dyasduf7towuvcaw2vnwo465hxwe3ldgjuxf3myykwce0svchdpah == \a\c\n\p\l\r\7\v\v\u\i\0\t\9\z\4\v\g\p\u\1\7\p\8\9\v\y\t\s\d\w\i\r\1\d\1\7\m\a\e\0\7\s\c\c\r\g\p\f\1\m\v\q\j\0\2\b\h\o\9\d\a\n\b\a\t\o\t\0\2\t\p\m\4\m\b\5\w\h\9\6\f\v\s\t\p\0\w\d\x\5\d\0\9\5\w\8\r\7\u\c\9\8\6\h\3\l\m\1\l\l\j\t\q\o\7\u\s\j\g\7\q\s\9\y\7\n\w\t\4\0\y\0\c\f\k\8\x\l\a\5\q\0\o\1\p\l\s\y\m\c\p\l\e\1\e\u\1\d\5\d\z\4\5\y\c\i\7\1\u\3\6\8\3\3\s\1\n\p\o\w\c\q\4\c\o\y\e\3\d\a\t\q\j\t\p\6\4\n\i\5\i\2\b\3\7\9\1\n\x\z\5\g\j\p\x\8\n\g\v\1\f\r\l\i\t\m\r\s\u\6\q\3\0\p\8\p\c\v\p\j\j\d\i\z\e\2\o\u\k\6\o\o\w\v\d\a\r\q\4\8\8\i\6\x\1\z\7\s\z\8\n\8\s\1\w\b\n\t\8\0\3\t\9\5\n\y\1\v\5\h\z\7\u\3\2\s\f\h\f\g\z\j\d\8\2\o\a\t\b\k\m\a\5\x\l\c\u\m\w\s\p\a\n\s\2\d\d\b\8\i\4\s\g\i\y\f\k\c\p\2\d\l\9\u\e\y\0\7\b\d\8\7\z\d\d\b\9\x\d\n\9\b\y\x\4\e\6\m\6\8\l\9\m\n\i\x\4\s\u\b\5\3\2\6\g\4\r\v\3\j\c\1\i\r\1\5\k\f\j\s\q\p\g\0\l\3\x\t\q\e\0\m\3\v\2\t\v\m\8\5\g\k\y\g\8\8\5\e\6\q\1\a\i\5\c\b\7\b\2\2\o\t\p\8\y\a\e\3\0\m\3\w\h\e\r\p\k\9\d\y\a\s\d\u\f\7\t\o\w\u\v\c\a\w\2\v\n\w\o\4\6\5\h\x\w\e\3\l\d\g\j\u\x\f\3\m\y\y\k\w\c\e\0\s\v\c\h\d\p\a\h ]] 00:08:20.253 00:08:20.253 real 0m11.981s 00:08:20.253 user 0m9.504s 00:08:20.253 sys 0m1.494s 00:08:20.253 02:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.253 02:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:20.253 ************************************ 00:08:20.253 END TEST dd_flags_misc_forced_aio 00:08:20.253 ************************************ 00:08:20.253 02:52:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:20.254 02:52:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:20.254 02:52:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:20.254 00:08:20.254 real 0m50.533s 00:08:20.254 user 0m38.164s 00:08:20.254 sys 0m14.271s 00:08:20.254 02:52:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.254 02:52:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set 
+x 00:08:20.254 ************************************ 00:08:20.254 END TEST spdk_dd_posix 00:08:20.254 ************************************ 00:08:20.254 02:52:51 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:20.254 02:52:51 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.254 02:52:51 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.254 02:52:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:20.254 ************************************ 00:08:20.254 START TEST spdk_dd_malloc 00:08:20.254 ************************************ 00:08:20.254 02:52:51 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:20.513 * Looking for test storage... 00:08:20.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:20.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.513 --rc genhtml_branch_coverage=1 00:08:20.513 --rc genhtml_function_coverage=1 00:08:20.513 --rc genhtml_legend=1 00:08:20.513 --rc geninfo_all_blocks=1 00:08:20.513 --rc geninfo_unexecuted_blocks=1 00:08:20.513 00:08:20.513 ' 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:20.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.513 --rc genhtml_branch_coverage=1 00:08:20.513 --rc genhtml_function_coverage=1 00:08:20.513 --rc genhtml_legend=1 00:08:20.513 --rc geninfo_all_blocks=1 00:08:20.513 --rc geninfo_unexecuted_blocks=1 00:08:20.513 00:08:20.513 ' 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:20.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.513 --rc genhtml_branch_coverage=1 00:08:20.513 --rc genhtml_function_coverage=1 00:08:20.513 --rc genhtml_legend=1 00:08:20.513 --rc geninfo_all_blocks=1 00:08:20.513 --rc geninfo_unexecuted_blocks=1 00:08:20.513 00:08:20.513 ' 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:20.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.513 --rc genhtml_branch_coverage=1 00:08:20.513 --rc genhtml_function_coverage=1 00:08:20.513 --rc genhtml_legend=1 00:08:20.513 --rc geninfo_all_blocks=1 00:08:20.513 --rc geninfo_unexecuted_blocks=1 00:08:20.513 00:08:20.513 ' 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.513 02:52:51 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:20.513 ************************************ 00:08:20.513 START TEST dd_malloc_copy 00:08:20.513 ************************************ 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:20.513 02:52:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:20.514 02:52:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:08:20.514 02:52:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:20.514 02:52:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:20.514 02:52:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:20.514 02:52:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:20.514 02:52:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:20.514 02:52:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:20.514 02:52:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:20.772 { 00:08:20.772 "subsystems": [ 00:08:20.772 { 00:08:20.772 "subsystem": "bdev", 00:08:20.772 "config": [ 00:08:20.772 { 00:08:20.772 "params": { 00:08:20.772 "block_size": 512, 00:08:20.772 "num_blocks": 1048576, 00:08:20.772 "name": "malloc0" 00:08:20.772 }, 00:08:20.772 "method": "bdev_malloc_create" 00:08:20.772 }, 00:08:20.772 { 00:08:20.772 "params": { 00:08:20.772 "block_size": 512, 00:08:20.772 "num_blocks": 1048576, 00:08:20.772 "name": "malloc1" 00:08:20.772 }, 00:08:20.772 "method": "bdev_malloc_create" 00:08:20.772 }, 00:08:20.772 { 00:08:20.772 "method": "bdev_wait_for_examine" 00:08:20.772 } 00:08:20.772 ] 00:08:20.772 } 00:08:20.772 ] 00:08:20.772 } 00:08:20.772 [2024-12-05 02:52:51.406740] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:08:20.772 [2024-12-05 02:52:51.406948] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62503 ] 00:08:20.772 [2024-12-05 02:52:51.588078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.031 [2024-12-05 02:52:51.689875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.031 [2024-12-05 02:52:51.862616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.566  [2024-12-05T02:52:54.978Z] Copying: 190/512 [MB] (190 MBps) [2024-12-05T02:52:55.914Z] Copying: 372/512 [MB] (181 MBps) [2024-12-05T02:52:59.256Z] Copying: 512/512 [MB] (average 184 MBps) 00:08:28.412 00:08:28.412 02:52:58 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:28.412 02:52:58 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:28.412 02:52:58 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:28.412 02:52:58 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:28.412 { 00:08:28.412 "subsystems": [ 00:08:28.412 { 00:08:28.412 "subsystem": "bdev", 00:08:28.412 "config": [ 00:08:28.412 { 00:08:28.412 "params": { 00:08:28.412 "block_size": 512, 00:08:28.412 "num_blocks": 1048576, 00:08:28.412 "name": "malloc0" 00:08:28.412 }, 00:08:28.412 "method": "bdev_malloc_create" 00:08:28.412 }, 00:08:28.412 { 00:08:28.412 "params": { 00:08:28.412 "block_size": 512, 00:08:28.412 "num_blocks": 1048576, 00:08:28.412 "name": "malloc1" 00:08:28.412 }, 00:08:28.412 "method": 
"bdev_malloc_create" 00:08:28.412 }, 00:08:28.412 { 00:08:28.412 "method": "bdev_wait_for_examine" 00:08:28.412 } 00:08:28.412 ] 00:08:28.412 } 00:08:28.412 ] 00:08:28.412 } 00:08:28.412 [2024-12-05 02:52:58.706830] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:08:28.412 [2024-12-05 02:52:58.706995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62586 ] 00:08:28.412 [2024-12-05 02:52:58.888542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.412 [2024-12-05 02:52:58.978990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.412 [2024-12-05 02:52:59.148914] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.318  [2024-12-05T02:53:02.099Z] Copying: 181/512 [MB] (181 MBps) [2024-12-05T02:53:03.037Z] Copying: 360/512 [MB] (178 MBps) [2024-12-05T02:53:06.327Z] Copying: 512/512 [MB] (average 179 MBps) 00:08:35.483 00:08:35.483 00:08:35.483 real 0m14.681s 00:08:35.483 user 0m13.650s 00:08:35.483 sys 0m0.837s 00:08:35.483 02:53:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.483 02:53:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:35.483 ************************************ 00:08:35.483 END TEST dd_malloc_copy 00:08:35.483 ************************************ 00:08:35.483 00:08:35.483 real 0m14.930s 00:08:35.483 user 0m13.787s 00:08:35.483 sys 0m0.950s 00:08:35.483 02:53:06 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.483 02:53:06 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:35.483 ************************************ 00:08:35.483 END TEST spdk_dd_malloc 00:08:35.483 ************************************ 00:08:35.483 02:53:06 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:35.483 02:53:06 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:35.483 02:53:06 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.483 02:53:06 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:35.483 ************************************ 00:08:35.483 START TEST spdk_dd_bdev_to_bdev 00:08:35.483 ************************************ 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:35.483 * Looking for test storage... 
00:08:35.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.483 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:35.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.484 --rc genhtml_branch_coverage=1 00:08:35.484 --rc genhtml_function_coverage=1 00:08:35.484 --rc genhtml_legend=1 00:08:35.484 --rc geninfo_all_blocks=1 00:08:35.484 --rc geninfo_unexecuted_blocks=1 00:08:35.484 00:08:35.484 ' 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:35.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.484 --rc genhtml_branch_coverage=1 00:08:35.484 --rc genhtml_function_coverage=1 00:08:35.484 --rc genhtml_legend=1 00:08:35.484 --rc geninfo_all_blocks=1 00:08:35.484 --rc geninfo_unexecuted_blocks=1 00:08:35.484 00:08:35.484 ' 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:35.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.484 --rc genhtml_branch_coverage=1 00:08:35.484 --rc genhtml_function_coverage=1 00:08:35.484 --rc genhtml_legend=1 00:08:35.484 --rc geninfo_all_blocks=1 00:08:35.484 --rc geninfo_unexecuted_blocks=1 00:08:35.484 00:08:35.484 ' 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:35.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.484 --rc genhtml_branch_coverage=1 00:08:35.484 --rc genhtml_function_coverage=1 00:08:35.484 --rc genhtml_legend=1 00:08:35.484 --rc geninfo_all_blocks=1 00:08:35.484 --rc geninfo_unexecuted_blocks=1 00:08:35.484 00:08:35.484 ' 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.484 02:53:06 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:35.484 ************************************ 00:08:35.484 START TEST dd_inflate_file 00:08:35.484 ************************************ 00:08:35.484 02:53:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:35.743 [2024-12-05 02:53:06.390736] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:08:35.743 [2024-12-05 02:53:06.390986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62745 ] 00:08:35.743 [2024-12-05 02:53:06.572941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.002 [2024-12-05 02:53:06.662922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.002 [2024-12-05 02:53:06.815660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.262  [2024-12-05T02:53:08.044Z] Copying: 64/64 [MB] (average 1729 MBps) 00:08:37.200 00:08:37.200 00:08:37.200 real 0m1.584s 00:08:37.200 user 0m1.268s 00:08:37.200 sys 0m0.919s 00:08:37.200 02:53:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.200 ************************************ 00:08:37.200 END TEST dd_inflate_file 00:08:37.200 02:53:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:37.200 ************************************ 00:08:37.200 02:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:37.200 02:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:37.200 02:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:37.200 02:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:37.200 02:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:37.200 02:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:37.200 02:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.200 02:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:37.200 02:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:37.200 ************************************ 00:08:37.200 START TEST dd_copy_to_out_bdev 00:08:37.200 ************************************ 00:08:37.200 02:53:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:37.200 { 00:08:37.200 "subsystems": [ 00:08:37.200 { 00:08:37.200 "subsystem": "bdev", 00:08:37.200 "config": [ 00:08:37.200 { 00:08:37.200 "params": { 00:08:37.200 "trtype": "pcie", 00:08:37.200 "traddr": "0000:00:10.0", 00:08:37.200 "name": "Nvme0" 00:08:37.200 }, 00:08:37.200 "method": "bdev_nvme_attach_controller" 00:08:37.200 }, 00:08:37.200 { 00:08:37.200 "params": { 00:08:37.200 "trtype": "pcie", 00:08:37.200 "traddr": "0000:00:11.0", 00:08:37.200 "name": "Nvme1" 00:08:37.200 }, 00:08:37.200 "method": "bdev_nvme_attach_controller" 00:08:37.200 }, 00:08:37.200 { 00:08:37.200 "method": "bdev_wait_for_examine" 00:08:37.200 } 00:08:37.200 ] 00:08:37.200 } 00:08:37.200 ] 00:08:37.200 } 00:08:37.200 [2024-12-05 02:53:08.000083] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:08:37.200 [2024-12-05 02:53:08.000263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62789 ] 00:08:37.460 [2024-12-05 02:53:08.179832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.460 [2024-12-05 02:53:08.266783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.719 [2024-12-05 02:53:08.422259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.099  [2024-12-05T02:53:10.202Z] Copying: 48/64 [MB] (48 MBps) [2024-12-05T02:53:11.137Z] Copying: 64/64 [MB] (average 48 MBps) 00:08:40.293 00:08:40.293 00:08:40.293 real 0m3.073s 00:08:40.293 user 0m2.767s 00:08:40.293 sys 0m2.282s 00:08:40.293 02:53:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.293 02:53:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:40.293 ************************************ 00:08:40.293 END TEST dd_copy_to_out_bdev 00:08:40.293 ************************************ 00:08:40.293 02:53:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:40.293 02:53:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:40.293 02:53:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.293 02:53:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.293 02:53:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:40.293 ************************************ 00:08:40.293 START TEST dd_offset_magic 00:08:40.293 ************************************ 00:08:40.293 02:53:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:08:40.293 02:53:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:40.293 02:53:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:40.293 02:53:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:40.293 02:53:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:40.293 02:53:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:40.293 02:53:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:40.293 02:53:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:40.293 02:53:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:40.293 { 00:08:40.293 "subsystems": [ 00:08:40.293 { 00:08:40.293 "subsystem": "bdev", 00:08:40.293 "config": [ 00:08:40.293 { 00:08:40.293 "params": { 00:08:40.293 "trtype": "pcie", 00:08:40.293 "traddr": "0000:00:10.0", 00:08:40.293 "name": "Nvme0" 00:08:40.293 }, 00:08:40.293 "method": "bdev_nvme_attach_controller" 00:08:40.293 }, 00:08:40.293 { 00:08:40.293 "params": { 00:08:40.293 "trtype": "pcie", 00:08:40.293 "traddr": "0000:00:11.0", 00:08:40.293 "name": "Nvme1" 
00:08:40.293 }, 00:08:40.293 "method": "bdev_nvme_attach_controller" 00:08:40.293 }, 00:08:40.293 { 00:08:40.293 "method": "bdev_wait_for_examine" 00:08:40.293 } 00:08:40.293 ] 00:08:40.293 } 00:08:40.293 ] 00:08:40.293 } 00:08:40.293 [2024-12-05 02:53:11.110442] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:08:40.293 [2024-12-05 02:53:11.110603] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62846 ] 00:08:40.551 [2024-12-05 02:53:11.278284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.551 [2024-12-05 02:53:11.373209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.820 [2024-12-05 02:53:11.536117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.092  [2024-12-05T02:53:12.927Z] Copying: 65/65 [MB] (average 1140 MBps) 00:08:42.083 00:08:42.083 02:53:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:42.083 02:53:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:42.083 02:53:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:42.083 02:53:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:42.083 { 00:08:42.083 "subsystems": [ 00:08:42.083 { 00:08:42.083 "subsystem": "bdev", 00:08:42.083 "config": [ 00:08:42.083 { 00:08:42.083 "params": { 00:08:42.083 "trtype": "pcie", 00:08:42.083 "traddr": "0000:00:10.0", 00:08:42.083 "name": "Nvme0" 00:08:42.083 }, 00:08:42.083 "method": "bdev_nvme_attach_controller" 00:08:42.083 }, 00:08:42.083 { 00:08:42.083 "params": { 00:08:42.083 "trtype": "pcie", 00:08:42.083 "traddr": "0000:00:11.0", 00:08:42.083 "name": "Nvme1" 00:08:42.083 }, 00:08:42.083 "method": "bdev_nvme_attach_controller" 00:08:42.083 }, 00:08:42.083 { 00:08:42.083 "method": "bdev_wait_for_examine" 00:08:42.083 } 00:08:42.083 ] 00:08:42.083 } 00:08:42.083 ] 00:08:42.083 } 00:08:42.083 [2024-12-05 02:53:12.782025] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:08:42.083 [2024-12-05 02:53:12.782220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62873 ] 00:08:42.342 [2024-12-05 02:53:12.958480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.342 [2024-12-05 02:53:13.052692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.602 [2024-12-05 02:53:13.216754] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.862  [2024-12-05T02:53:14.640Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:43.796 00:08:43.796 02:53:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:43.796 02:53:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:43.796 02:53:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:43.796 02:53:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:43.796 02:53:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:43.796 02:53:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:43.796 02:53:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:43.796 { 00:08:43.796 "subsystems": [ 00:08:43.796 { 00:08:43.796 "subsystem": "bdev", 00:08:43.796 "config": [ 00:08:43.796 { 00:08:43.796 "params": { 00:08:43.796 "trtype": "pcie", 00:08:43.796 "traddr": "0000:00:10.0", 00:08:43.796 "name": "Nvme0" 00:08:43.796 }, 00:08:43.796 "method": "bdev_nvme_attach_controller" 00:08:43.796 }, 00:08:43.796 { 00:08:43.796 "params": { 00:08:43.796 "trtype": "pcie", 00:08:43.796 "traddr": "0000:00:11.0", 00:08:43.796 "name": "Nvme1" 00:08:43.796 }, 00:08:43.796 "method": "bdev_nvme_attach_controller" 00:08:43.796 }, 00:08:43.796 { 00:08:43.796 "method": "bdev_wait_for_examine" 00:08:43.796 } 00:08:43.796 ] 00:08:43.796 } 00:08:43.796 ] 00:08:43.796 } 00:08:43.796 [2024-12-05 02:53:14.466097] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:08:43.796 [2024-12-05 02:53:14.466266] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62907 ] 00:08:44.054 [2024-12-05 02:53:14.643908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.054 [2024-12-05 02:53:14.729925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.054 [2024-12-05 02:53:14.888119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:44.622  [2024-12-05T02:53:16.032Z] Copying: 65/65 [MB] (average 1226 MBps) 00:08:45.188 00:08:45.188 02:53:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:45.188 02:53:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:45.188 02:53:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:45.188 02:53:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:45.188 { 00:08:45.188 "subsystems": [ 00:08:45.188 { 00:08:45.188 "subsystem": "bdev", 00:08:45.188 "config": [ 00:08:45.188 { 00:08:45.188 "params": { 00:08:45.188 "trtype": "pcie", 00:08:45.188 "traddr": "0000:00:10.0", 00:08:45.188 "name": "Nvme0" 00:08:45.188 }, 00:08:45.188 "method": "bdev_nvme_attach_controller" 00:08:45.188 }, 00:08:45.188 { 00:08:45.188 "params": { 00:08:45.188 "trtype": "pcie", 00:08:45.188 "traddr": "0000:00:11.0", 00:08:45.188 "name": "Nvme1" 00:08:45.188 }, 00:08:45.188 "method": "bdev_nvme_attach_controller" 00:08:45.188 }, 00:08:45.188 { 00:08:45.188 "method": "bdev_wait_for_examine" 00:08:45.188 } 00:08:45.188 ] 00:08:45.188 } 00:08:45.188 ] 00:08:45.188 } 00:08:45.447 [2024-12-05 02:53:16.041409] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:08:45.447 [2024-12-05 02:53:16.042146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62928 ] 00:08:45.447 [2024-12-05 02:53:16.223904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.706 [2024-12-05 02:53:16.318418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.706 [2024-12-05 02:53:16.469388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:45.965  [2024-12-05T02:53:17.746Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:46.902 00:08:46.902 02:53:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:46.902 02:53:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:46.902 00:08:46.902 real 0m6.583s 00:08:46.902 user 0m5.494s 00:08:46.902 sys 0m2.222s 00:08:46.902 02:53:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.902 02:53:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:46.902 ************************************ 00:08:46.902 END TEST dd_offset_magic 00:08:46.902 ************************************ 00:08:46.902 02:53:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:46.902 02:53:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:46.902 02:53:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:46.902 02:53:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:46.902 02:53:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:46.902 02:53:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:46.902 02:53:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:46.902 02:53:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:46.903 02:53:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:46.903 02:53:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:46.903 02:53:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:46.903 { 00:08:46.903 "subsystems": [ 00:08:46.903 { 00:08:46.903 "subsystem": "bdev", 00:08:46.903 "config": [ 00:08:46.903 { 00:08:46.903 "params": { 00:08:46.903 "trtype": "pcie", 00:08:46.903 "traddr": "0000:00:10.0", 00:08:46.903 "name": "Nvme0" 00:08:46.903 }, 00:08:46.903 "method": "bdev_nvme_attach_controller" 00:08:46.903 }, 00:08:46.903 { 00:08:46.903 "params": { 00:08:46.903 "trtype": "pcie", 00:08:46.903 "traddr": "0000:00:11.0", 00:08:46.903 "name": "Nvme1" 00:08:46.903 }, 00:08:46.903 "method": "bdev_nvme_attach_controller" 00:08:46.903 }, 00:08:46.903 { 00:08:46.903 "method": "bdev_wait_for_examine" 00:08:46.903 } 00:08:46.903 ] 00:08:46.903 } 00:08:46.903 ] 00:08:46.903 } 00:08:46.903 [2024-12-05 02:53:17.732954] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:08:46.903 [2024-12-05 02:53:17.733090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62977 ] 00:08:47.163 [2024-12-05 02:53:17.895856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.163 [2024-12-05 02:53:17.978013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.422 [2024-12-05 02:53:18.129487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.681  [2024-12-05T02:53:19.093Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:48.249 00:08:48.249 02:53:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:48.249 02:53:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:48.249 02:53:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:48.249 02:53:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:48.249 02:53:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:48.249 02:53:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:48.249 02:53:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:48.249 02:53:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:48.249 02:53:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:48.249 02:53:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:48.508 { 00:08:48.508 "subsystems": [ 00:08:48.508 { 00:08:48.508 "subsystem": "bdev", 00:08:48.508 "config": [ 00:08:48.508 { 00:08:48.508 "params": { 00:08:48.508 "trtype": "pcie", 00:08:48.508 "traddr": "0000:00:10.0", 00:08:48.508 "name": "Nvme0" 00:08:48.508 }, 00:08:48.508 "method": "bdev_nvme_attach_controller" 00:08:48.508 }, 00:08:48.508 { 00:08:48.508 "params": { 00:08:48.509 "trtype": "pcie", 00:08:48.509 "traddr": "0000:00:11.0", 00:08:48.509 "name": "Nvme1" 00:08:48.509 }, 00:08:48.509 "method": "bdev_nvme_attach_controller" 00:08:48.509 }, 00:08:48.509 { 00:08:48.509 "method": "bdev_wait_for_examine" 00:08:48.509 } 00:08:48.509 ] 00:08:48.509 } 00:08:48.509 ] 00:08:48.509 } 00:08:48.509 [2024-12-05 02:53:19.148053] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:08:48.509 [2024-12-05 02:53:19.148239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62999 ] 00:08:48.509 [2024-12-05 02:53:19.317162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.767 [2024-12-05 02:53:19.421360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.767 [2024-12-05 02:53:19.582710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.026  [2024-12-05T02:53:20.805Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:08:49.961 00:08:49.961 02:53:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:49.961 00:08:49.961 real 0m14.659s 00:08:49.961 user 0m12.269s 00:08:49.961 sys 0m7.115s 00:08:49.961 02:53:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.961 02:53:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:49.961 ************************************ 00:08:49.961 END TEST spdk_dd_bdev_to_bdev 00:08:49.961 ************************************ 00:08:49.961 02:53:20 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:49.961 02:53:20 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:49.961 02:53:20 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.961 02:53:20 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.961 02:53:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:49.961 ************************************ 00:08:49.961 START TEST spdk_dd_uring 00:08:49.961 ************************************ 00:08:49.961 02:53:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:50.221 * Looking for test storage... 
00:08:50.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:50.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.221 --rc genhtml_branch_coverage=1 00:08:50.221 --rc genhtml_function_coverage=1 00:08:50.221 --rc genhtml_legend=1 00:08:50.221 --rc geninfo_all_blocks=1 00:08:50.221 --rc geninfo_unexecuted_blocks=1 00:08:50.221 00:08:50.221 ' 00:08:50.221 02:53:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:50.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.221 --rc genhtml_branch_coverage=1 00:08:50.222 --rc genhtml_function_coverage=1 00:08:50.222 --rc genhtml_legend=1 00:08:50.222 --rc geninfo_all_blocks=1 00:08:50.222 --rc geninfo_unexecuted_blocks=1 00:08:50.222 00:08:50.222 ' 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:50.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.222 --rc genhtml_branch_coverage=1 00:08:50.222 --rc genhtml_function_coverage=1 00:08:50.222 --rc genhtml_legend=1 00:08:50.222 --rc geninfo_all_blocks=1 00:08:50.222 --rc geninfo_unexecuted_blocks=1 00:08:50.222 00:08:50.222 ' 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:50.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.222 --rc genhtml_branch_coverage=1 00:08:50.222 --rc genhtml_function_coverage=1 00:08:50.222 --rc genhtml_legend=1 00:08:50.222 --rc geninfo_all_blocks=1 00:08:50.222 --rc geninfo_unexecuted_blocks=1 00:08:50.222 00:08:50.222 ' 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:50.222 ************************************ 00:08:50.222 START TEST dd_uring_copy 00:08:50.222 ************************************ 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:50.222 
02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:50.222 02:53:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:50.222 02:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=rc68bh9l75079ys4645r7regmusaekfx6qbcd44tj8j58q7uf7gwiwej36wq1caim7jvagk4v3v2f8i7dyge27nmzcwuyq45ym1onj3nuh7fbmqh3lr5tdlepvoc4fwtr15qnxteyd03dwz1tfkagzrh533ixpe0qqf3qmzwi9254f8a90f65lclg3gl7z9db3vsr35s1y9ok3a5utq3j8r45u7y9ws3hfm47rpr3f7x6dzlv5waffxhdl5bjtrjxug3noppk7grxtimx1k04ebbyz64wqvmzg08alcrtt3dmo1pcy4o92qbuafbq5lenat3nue3fq8n7utt69yra9xy56kw370or22gi1u79rroy2kzpkfdbdkdww0jyasqsgti16rnnsir7y6yybdbk106lbmtkqcqyl6j7r4pxofdcqxhagzvc1dswn0ptjvdj404mavpi1pnw6qgbbhwgg0uwxsd3zz414y002jfvmp502tkqsso5zaivqb10pxotstic92hl2sntypk71u0wgo8fn14gl3q4kot0ve466k69gl5o0xib63hnqvf0us457tq2qxuhsmunr7na2v253yp6atqt0e39o0fh4zsjor9thpuxpbmo8hdg9ivwfo9vvlzakqkofhutj0gbx3p8znr39xn06j7r0divj52cnbcphm5tei46wr5yq7ilvebt6vzb92gw5tyu8641pypo208e9ey6h0mrt12sal7j464yfmcx9hhj2tj6mc3eictwousxo682im34dt5i442j6a7uwgj384j6qfoj71hkxsfaatiw4e8z5svyihs9npww2qi9714e1l4m2urbxt9csex7j8slzuxxks30hzm018c5u0w7jcv6dxg5avxw7k4saa6sw6yp4m92n6z76ovr4847hw87pns9ca2y8s9ayaxfy8vic4dya5ghtus294rrjlg1jkrj83wl1md8sx1pluf8zd47zhajztsa5j3kyihqjb6noaz2o72ahpdnj48 00:08:50.222 02:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
rc68bh9l75079ys4645r7regmusaekfx6qbcd44tj8j58q7uf7gwiwej36wq1caim7jvagk4v3v2f8i7dyge27nmzcwuyq45ym1onj3nuh7fbmqh3lr5tdlepvoc4fwtr15qnxteyd03dwz1tfkagzrh533ixpe0qqf3qmzwi9254f8a90f65lclg3gl7z9db3vsr35s1y9ok3a5utq3j8r45u7y9ws3hfm47rpr3f7x6dzlv5waffxhdl5bjtrjxug3noppk7grxtimx1k04ebbyz64wqvmzg08alcrtt3dmo1pcy4o92qbuafbq5lenat3nue3fq8n7utt69yra9xy56kw370or22gi1u79rroy2kzpkfdbdkdww0jyasqsgti16rnnsir7y6yybdbk106lbmtkqcqyl6j7r4pxofdcqxhagzvc1dswn0ptjvdj404mavpi1pnw6qgbbhwgg0uwxsd3zz414y002jfvmp502tkqsso5zaivqb10pxotstic92hl2sntypk71u0wgo8fn14gl3q4kot0ve466k69gl5o0xib63hnqvf0us457tq2qxuhsmunr7na2v253yp6atqt0e39o0fh4zsjor9thpuxpbmo8hdg9ivwfo9vvlzakqkofhutj0gbx3p8znr39xn06j7r0divj52cnbcphm5tei46wr5yq7ilvebt6vzb92gw5tyu8641pypo208e9ey6h0mrt12sal7j464yfmcx9hhj2tj6mc3eictwousxo682im34dt5i442j6a7uwgj384j6qfoj71hkxsfaatiw4e8z5svyihs9npww2qi9714e1l4m2urbxt9csex7j8slzuxxks30hzm018c5u0w7jcv6dxg5avxw7k4saa6sw6yp4m92n6z76ovr4847hw87pns9ca2y8s9ayaxfy8vic4dya5ghtus294rrjlg1jkrj83wl1md8sx1pluf8zd47zhajztsa5j3kyihqjb6noaz2o72ahpdnj48 00:08:50.222 02:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:50.481 [2024-12-05 02:53:21.113322] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:08:50.481 [2024-12-05 02:53:21.113500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63089 ] 00:08:50.481 [2024-12-05 02:53:21.289138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.740 [2024-12-05 02:53:21.387534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.740 [2024-12-05 02:53:21.532946] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.676  [2024-12-05T02:53:24.425Z] Copying: 511/511 [MB] (average 1395 MBps) 00:08:53.581 00:08:53.581 02:53:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:53.581 02:53:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:53.581 02:53:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:53.581 02:53:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:53.581 { 00:08:53.581 "subsystems": [ 00:08:53.581 { 00:08:53.581 "subsystem": "bdev", 00:08:53.581 "config": [ 00:08:53.581 { 00:08:53.581 "params": { 00:08:53.581 "block_size": 512, 00:08:53.581 "num_blocks": 1048576, 00:08:53.581 "name": "malloc0" 00:08:53.581 }, 00:08:53.581 "method": "bdev_malloc_create" 00:08:53.581 }, 00:08:53.581 { 00:08:53.581 "params": { 00:08:53.581 "filename": "/dev/zram1", 00:08:53.581 "name": "uring0" 00:08:53.581 }, 00:08:53.581 "method": "bdev_uring_create" 00:08:53.581 }, 00:08:53.581 { 00:08:53.581 "method": "bdev_wait_for_examine" 00:08:53.581 } 00:08:53.581 ] 00:08:53.581 } 00:08:53.581 ] 00:08:53.581 } 00:08:53.581 [2024-12-05 02:53:24.380321] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:08:53.581 [2024-12-05 02:53:24.380454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63128 ] 00:08:53.840 [2024-12-05 02:53:24.548333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.840 [2024-12-05 02:53:24.639658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.099 [2024-12-05 02:53:24.795553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:55.487  [2024-12-05T02:53:27.707Z] Copying: 191/512 [MB] (191 MBps) [2024-12-05T02:53:28.274Z] Copying: 384/512 [MB] (192 MBps) [2024-12-05T02:53:30.178Z] Copying: 512/512 [MB] (average 193 MBps) 00:08:59.334 00:08:59.335 02:53:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:59.335 02:53:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:59.335 02:53:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:59.335 02:53:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:59.335 { 00:08:59.335 "subsystems": [ 00:08:59.335 { 00:08:59.335 "subsystem": "bdev", 00:08:59.335 "config": [ 00:08:59.335 { 00:08:59.335 "params": { 00:08:59.335 "block_size": 512, 00:08:59.335 "num_blocks": 1048576, 00:08:59.335 "name": "malloc0" 00:08:59.335 }, 00:08:59.335 "method": "bdev_malloc_create" 00:08:59.335 }, 00:08:59.335 { 00:08:59.335 "params": { 00:08:59.335 "filename": "/dev/zram1", 00:08:59.335 "name": "uring0" 00:08:59.335 }, 00:08:59.335 "method": "bdev_uring_create" 00:08:59.335 }, 00:08:59.335 { 00:08:59.335 "method": "bdev_wait_for_examine" 00:08:59.335 } 00:08:59.335 ] 00:08:59.335 } 00:08:59.335 ] 00:08:59.335 } 00:08:59.335 [2024-12-05 02:53:29.942659] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:08:59.335 [2024-12-05 02:53:29.942829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63201 ] 00:08:59.335 [2024-12-05 02:53:30.122469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.593 [2024-12-05 02:53:30.207289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.593 [2024-12-05 02:53:30.368120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:01.495  [2024-12-05T02:53:32.908Z] Copying: 158/512 [MB] (158 MBps) [2024-12-05T02:53:34.287Z] Copying: 313/512 [MB] (155 MBps) [2024-12-05T02:53:34.287Z] Copying: 453/512 [MB] (139 MBps) [2024-12-05T02:53:36.200Z] Copying: 512/512 [MB] (average 151 MBps) 00:09:05.356 00:09:05.356 02:53:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:05.356 02:53:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ rc68bh9l75079ys4645r7regmusaekfx6qbcd44tj8j58q7uf7gwiwej36wq1caim7jvagk4v3v2f8i7dyge27nmzcwuyq45ym1onj3nuh7fbmqh3lr5tdlepvoc4fwtr15qnxteyd03dwz1tfkagzrh533ixpe0qqf3qmzwi9254f8a90f65lclg3gl7z9db3vsr35s1y9ok3a5utq3j8r45u7y9ws3hfm47rpr3f7x6dzlv5waffxhdl5bjtrjxug3noppk7grxtimx1k04ebbyz64wqvmzg08alcrtt3dmo1pcy4o92qbuafbq5lenat3nue3fq8n7utt69yra9xy56kw370or22gi1u79rroy2kzpkfdbdkdww0jyasqsgti16rnnsir7y6yybdbk106lbmtkqcqyl6j7r4pxofdcqxhagzvc1dswn0ptjvdj404mavpi1pnw6qgbbhwgg0uwxsd3zz414y002jfvmp502tkqsso5zaivqb10pxotstic92hl2sntypk71u0wgo8fn14gl3q4kot0ve466k69gl5o0xib63hnqvf0us457tq2qxuhsmunr7na2v253yp6atqt0e39o0fh4zsjor9thpuxpbmo8hdg9ivwfo9vvlzakqkofhutj0gbx3p8znr39xn06j7r0divj52cnbcphm5tei46wr5yq7ilvebt6vzb92gw5tyu8641pypo208e9ey6h0mrt12sal7j464yfmcx9hhj2tj6mc3eictwousxo682im34dt5i442j6a7uwgj384j6qfoj71hkxsfaatiw4e8z5svyihs9npww2qi9714e1l4m2urbxt9csex7j8slzuxxks30hzm018c5u0w7jcv6dxg5avxw7k4saa6sw6yp4m92n6z76ovr4847hw87pns9ca2y8s9ayaxfy8vic4dya5ghtus294rrjlg1jkrj83wl1md8sx1pluf8zd47zhajztsa5j3kyihqjb6noaz2o72ahpdnj48 == 
\r\c\6\8\b\h\9\l\7\5\0\7\9\y\s\4\6\4\5\r\7\r\e\g\m\u\s\a\e\k\f\x\6\q\b\c\d\4\4\t\j\8\j\5\8\q\7\u\f\7\g\w\i\w\e\j\3\6\w\q\1\c\a\i\m\7\j\v\a\g\k\4\v\3\v\2\f\8\i\7\d\y\g\e\2\7\n\m\z\c\w\u\y\q\4\5\y\m\1\o\n\j\3\n\u\h\7\f\b\m\q\h\3\l\r\5\t\d\l\e\p\v\o\c\4\f\w\t\r\1\5\q\n\x\t\e\y\d\0\3\d\w\z\1\t\f\k\a\g\z\r\h\5\3\3\i\x\p\e\0\q\q\f\3\q\m\z\w\i\9\2\5\4\f\8\a\9\0\f\6\5\l\c\l\g\3\g\l\7\z\9\d\b\3\v\s\r\3\5\s\1\y\9\o\k\3\a\5\u\t\q\3\j\8\r\4\5\u\7\y\9\w\s\3\h\f\m\4\7\r\p\r\3\f\7\x\6\d\z\l\v\5\w\a\f\f\x\h\d\l\5\b\j\t\r\j\x\u\g\3\n\o\p\p\k\7\g\r\x\t\i\m\x\1\k\0\4\e\b\b\y\z\6\4\w\q\v\m\z\g\0\8\a\l\c\r\t\t\3\d\m\o\1\p\c\y\4\o\9\2\q\b\u\a\f\b\q\5\l\e\n\a\t\3\n\u\e\3\f\q\8\n\7\u\t\t\6\9\y\r\a\9\x\y\5\6\k\w\3\7\0\o\r\2\2\g\i\1\u\7\9\r\r\o\y\2\k\z\p\k\f\d\b\d\k\d\w\w\0\j\y\a\s\q\s\g\t\i\1\6\r\n\n\s\i\r\7\y\6\y\y\b\d\b\k\1\0\6\l\b\m\t\k\q\c\q\y\l\6\j\7\r\4\p\x\o\f\d\c\q\x\h\a\g\z\v\c\1\d\s\w\n\0\p\t\j\v\d\j\4\0\4\m\a\v\p\i\1\p\n\w\6\q\g\b\b\h\w\g\g\0\u\w\x\s\d\3\z\z\4\1\4\y\0\0\2\j\f\v\m\p\5\0\2\t\k\q\s\s\o\5\z\a\i\v\q\b\1\0\p\x\o\t\s\t\i\c\9\2\h\l\2\s\n\t\y\p\k\7\1\u\0\w\g\o\8\f\n\1\4\g\l\3\q\4\k\o\t\0\v\e\4\6\6\k\6\9\g\l\5\o\0\x\i\b\6\3\h\n\q\v\f\0\u\s\4\5\7\t\q\2\q\x\u\h\s\m\u\n\r\7\n\a\2\v\2\5\3\y\p\6\a\t\q\t\0\e\3\9\o\0\f\h\4\z\s\j\o\r\9\t\h\p\u\x\p\b\m\o\8\h\d\g\9\i\v\w\f\o\9\v\v\l\z\a\k\q\k\o\f\h\u\t\j\0\g\b\x\3\p\8\z\n\r\3\9\x\n\0\6\j\7\r\0\d\i\v\j\5\2\c\n\b\c\p\h\m\5\t\e\i\4\6\w\r\5\y\q\7\i\l\v\e\b\t\6\v\z\b\9\2\g\w\5\t\y\u\8\6\4\1\p\y\p\o\2\0\8\e\9\e\y\6\h\0\m\r\t\1\2\s\a\l\7\j\4\6\4\y\f\m\c\x\9\h\h\j\2\t\j\6\m\c\3\e\i\c\t\w\o\u\s\x\o\6\8\2\i\m\3\4\d\t\5\i\4\4\2\j\6\a\7\u\w\g\j\3\8\4\j\6\q\f\o\j\7\1\h\k\x\s\f\a\a\t\i\w\4\e\8\z\5\s\v\y\i\h\s\9\n\p\w\w\2\q\i\9\7\1\4\e\1\l\4\m\2\u\r\b\x\t\9\c\s\e\x\7\j\8\s\l\z\u\x\x\k\s\3\0\h\z\m\0\1\8\c\5\u\0\w\7\j\c\v\6\d\x\g\5\a\v\x\w\7\k\4\s\a\a\6\s\w\6\y\p\4\m\9\2\n\6\z\7\6\o\v\r\4\8\4\7\h\w\8\7\p\n\s\9\c\a\2\y\8\s\9\a\y\a\x\f\y\8\v\i\c\4\d\y\a\5\g\h\t\u\s\2\9\4\r\r\j\l\g\1\j\k\r\j\8\3\w\l\1\m\d\8\s\x\1\p\l\u\f\8\z\d\4\7\z\h\a\j\z\t\s\a\5\j\3\k\y\i\h\q\j\b\6\n\o\a\z\2\o\7\2\a\h\p\d\n\j\4\8 ]] 00:09:05.356 02:53:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:05.356 02:53:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ rc68bh9l75079ys4645r7regmusaekfx6qbcd44tj8j58q7uf7gwiwej36wq1caim7jvagk4v3v2f8i7dyge27nmzcwuyq45ym1onj3nuh7fbmqh3lr5tdlepvoc4fwtr15qnxteyd03dwz1tfkagzrh533ixpe0qqf3qmzwi9254f8a90f65lclg3gl7z9db3vsr35s1y9ok3a5utq3j8r45u7y9ws3hfm47rpr3f7x6dzlv5waffxhdl5bjtrjxug3noppk7grxtimx1k04ebbyz64wqvmzg08alcrtt3dmo1pcy4o92qbuafbq5lenat3nue3fq8n7utt69yra9xy56kw370or22gi1u79rroy2kzpkfdbdkdww0jyasqsgti16rnnsir7y6yybdbk106lbmtkqcqyl6j7r4pxofdcqxhagzvc1dswn0ptjvdj404mavpi1pnw6qgbbhwgg0uwxsd3zz414y002jfvmp502tkqsso5zaivqb10pxotstic92hl2sntypk71u0wgo8fn14gl3q4kot0ve466k69gl5o0xib63hnqvf0us457tq2qxuhsmunr7na2v253yp6atqt0e39o0fh4zsjor9thpuxpbmo8hdg9ivwfo9vvlzakqkofhutj0gbx3p8znr39xn06j7r0divj52cnbcphm5tei46wr5yq7ilvebt6vzb92gw5tyu8641pypo208e9ey6h0mrt12sal7j464yfmcx9hhj2tj6mc3eictwousxo682im34dt5i442j6a7uwgj384j6qfoj71hkxsfaatiw4e8z5svyihs9npww2qi9714e1l4m2urbxt9csex7j8slzuxxks30hzm018c5u0w7jcv6dxg5avxw7k4saa6sw6yp4m92n6z76ovr4847hw87pns9ca2y8s9ayaxfy8vic4dya5ghtus294rrjlg1jkrj83wl1md8sx1pluf8zd47zhajztsa5j3kyihqjb6noaz2o72ahpdnj48 == 
\r\c\6\8\b\h\9\l\7\5\0\7\9\y\s\4\6\4\5\r\7\r\e\g\m\u\s\a\e\k\f\x\6\q\b\c\d\4\4\t\j\8\j\5\8\q\7\u\f\7\g\w\i\w\e\j\3\6\w\q\1\c\a\i\m\7\j\v\a\g\k\4\v\3\v\2\f\8\i\7\d\y\g\e\2\7\n\m\z\c\w\u\y\q\4\5\y\m\1\o\n\j\3\n\u\h\7\f\b\m\q\h\3\l\r\5\t\d\l\e\p\v\o\c\4\f\w\t\r\1\5\q\n\x\t\e\y\d\0\3\d\w\z\1\t\f\k\a\g\z\r\h\5\3\3\i\x\p\e\0\q\q\f\3\q\m\z\w\i\9\2\5\4\f\8\a\9\0\f\6\5\l\c\l\g\3\g\l\7\z\9\d\b\3\v\s\r\3\5\s\1\y\9\o\k\3\a\5\u\t\q\3\j\8\r\4\5\u\7\y\9\w\s\3\h\f\m\4\7\r\p\r\3\f\7\x\6\d\z\l\v\5\w\a\f\f\x\h\d\l\5\b\j\t\r\j\x\u\g\3\n\o\p\p\k\7\g\r\x\t\i\m\x\1\k\0\4\e\b\b\y\z\6\4\w\q\v\m\z\g\0\8\a\l\c\r\t\t\3\d\m\o\1\p\c\y\4\o\9\2\q\b\u\a\f\b\q\5\l\e\n\a\t\3\n\u\e\3\f\q\8\n\7\u\t\t\6\9\y\r\a\9\x\y\5\6\k\w\3\7\0\o\r\2\2\g\i\1\u\7\9\r\r\o\y\2\k\z\p\k\f\d\b\d\k\d\w\w\0\j\y\a\s\q\s\g\t\i\1\6\r\n\n\s\i\r\7\y\6\y\y\b\d\b\k\1\0\6\l\b\m\t\k\q\c\q\y\l\6\j\7\r\4\p\x\o\f\d\c\q\x\h\a\g\z\v\c\1\d\s\w\n\0\p\t\j\v\d\j\4\0\4\m\a\v\p\i\1\p\n\w\6\q\g\b\b\h\w\g\g\0\u\w\x\s\d\3\z\z\4\1\4\y\0\0\2\j\f\v\m\p\5\0\2\t\k\q\s\s\o\5\z\a\i\v\q\b\1\0\p\x\o\t\s\t\i\c\9\2\h\l\2\s\n\t\y\p\k\7\1\u\0\w\g\o\8\f\n\1\4\g\l\3\q\4\k\o\t\0\v\e\4\6\6\k\6\9\g\l\5\o\0\x\i\b\6\3\h\n\q\v\f\0\u\s\4\5\7\t\q\2\q\x\u\h\s\m\u\n\r\7\n\a\2\v\2\5\3\y\p\6\a\t\q\t\0\e\3\9\o\0\f\h\4\z\s\j\o\r\9\t\h\p\u\x\p\b\m\o\8\h\d\g\9\i\v\w\f\o\9\v\v\l\z\a\k\q\k\o\f\h\u\t\j\0\g\b\x\3\p\8\z\n\r\3\9\x\n\0\6\j\7\r\0\d\i\v\j\5\2\c\n\b\c\p\h\m\5\t\e\i\4\6\w\r\5\y\q\7\i\l\v\e\b\t\6\v\z\b\9\2\g\w\5\t\y\u\8\6\4\1\p\y\p\o\2\0\8\e\9\e\y\6\h\0\m\r\t\1\2\s\a\l\7\j\4\6\4\y\f\m\c\x\9\h\h\j\2\t\j\6\m\c\3\e\i\c\t\w\o\u\s\x\o\6\8\2\i\m\3\4\d\t\5\i\4\4\2\j\6\a\7\u\w\g\j\3\8\4\j\6\q\f\o\j\7\1\h\k\x\s\f\a\a\t\i\w\4\e\8\z\5\s\v\y\i\h\s\9\n\p\w\w\2\q\i\9\7\1\4\e\1\l\4\m\2\u\r\b\x\t\9\c\s\e\x\7\j\8\s\l\z\u\x\x\k\s\3\0\h\z\m\0\1\8\c\5\u\0\w\7\j\c\v\6\d\x\g\5\a\v\x\w\7\k\4\s\a\a\6\s\w\6\y\p\4\m\9\2\n\6\z\7\6\o\v\r\4\8\4\7\h\w\8\7\p\n\s\9\c\a\2\y\8\s\9\a\y\a\x\f\y\8\v\i\c\4\d\y\a\5\g\h\t\u\s\2\9\4\r\r\j\l\g\1\j\k\r\j\8\3\w\l\1\m\d\8\s\x\1\p\l\u\f\8\z\d\4\7\z\h\a\j\z\t\s\a\5\j\3\k\y\i\h\q\j\b\6\n\o\a\z\2\o\7\2\a\h\p\d\n\j\4\8 ]] 00:09:05.356 02:53:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:05.924 02:53:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:05.924 02:53:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:09:05.924 02:53:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:05.924 02:53:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:05.924 { 00:09:05.924 "subsystems": [ 00:09:05.924 { 00:09:05.924 "subsystem": "bdev", 00:09:05.924 "config": [ 00:09:05.924 { 00:09:05.924 "params": { 00:09:05.924 "block_size": 512, 00:09:05.924 "num_blocks": 1048576, 00:09:05.924 "name": "malloc0" 00:09:05.924 }, 00:09:05.924 "method": "bdev_malloc_create" 00:09:05.924 }, 00:09:05.924 { 00:09:05.924 "params": { 00:09:05.924 "filename": "/dev/zram1", 00:09:05.924 "name": "uring0" 00:09:05.924 }, 00:09:05.924 "method": "bdev_uring_create" 00:09:05.924 }, 00:09:05.924 { 00:09:05.924 "method": "bdev_wait_for_examine" 00:09:05.924 } 00:09:05.924 ] 00:09:05.924 } 00:09:05.924 ] 00:09:05.924 } 00:09:05.924 [2024-12-05 02:53:36.579885] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:09:05.924 [2024-12-05 02:53:36.580026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63325 ] 00:09:05.924 [2024-12-05 02:53:36.750972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.183 [2024-12-05 02:53:36.838730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.183 [2024-12-05 02:53:36.986310] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:08.091  [2024-12-05T02:53:39.504Z] Copying: 137/512 [MB] (137 MBps) [2024-12-05T02:53:40.883Z] Copying: 260/512 [MB] (122 MBps) [2024-12-05T02:53:41.451Z] Copying: 397/512 [MB] (137 MBps) [2024-12-05T02:53:43.376Z] Copying: 512/512 [MB] (average 132 MBps) 00:09:12.532 00:09:12.532 02:53:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:12.532 02:53:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:12.532 02:53:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:12.532 02:53:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:12.532 02:53:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:12.532 02:53:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:09:12.532 02:53:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:12.532 02:53:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:12.532 { 00:09:12.532 "subsystems": [ 00:09:12.532 { 00:09:12.532 "subsystem": "bdev", 00:09:12.532 "config": [ 00:09:12.532 { 00:09:12.532 "params": { 00:09:12.532 "block_size": 512, 00:09:12.532 "num_blocks": 1048576, 00:09:12.532 "name": "malloc0" 00:09:12.532 }, 00:09:12.532 "method": "bdev_malloc_create" 00:09:12.532 }, 00:09:12.532 { 00:09:12.532 "params": { 00:09:12.532 "filename": "/dev/zram1", 00:09:12.532 "name": "uring0" 00:09:12.532 }, 00:09:12.532 "method": "bdev_uring_create" 00:09:12.532 }, 00:09:12.532 { 00:09:12.532 "params": { 00:09:12.532 "name": "uring0" 00:09:12.532 }, 00:09:12.532 "method": "bdev_uring_delete" 00:09:12.532 }, 00:09:12.532 { 00:09:12.532 "method": "bdev_wait_for_examine" 00:09:12.532 } 00:09:12.532 ] 00:09:12.532 } 00:09:12.532 ] 00:09:12.532 } 00:09:12.532 [2024-12-05 02:53:43.346870] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:09:12.532 [2024-12-05 02:53:43.347030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63413 ] 00:09:12.790 [2024-12-05 02:53:43.515564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.790 [2024-12-05 02:53:43.597090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.048 [2024-12-05 02:53:43.745631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.632  [2024-12-05T02:53:46.372Z] Copying: 0/0 [B] (average 0 Bps) 00:09:15.528 00:09:15.528 02:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:15.528 02:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:09:15.528 02:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:15.528 02:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:09:15.528 02:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:09:15.528 02:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.528 02:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:15.528 02:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:15.528 02:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.528 02:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.528 02:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.528 02:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.528 02:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.528 02:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.528 02:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:15.528 02:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:15.528 { 00:09:15.528 "subsystems": [ 00:09:15.528 { 00:09:15.528 "subsystem": "bdev", 00:09:15.528 "config": [ 00:09:15.528 { 00:09:15.528 "params": { 00:09:15.528 "block_size": 512, 00:09:15.528 "num_blocks": 1048576, 00:09:15.528 "name": "malloc0" 00:09:15.528 }, 00:09:15.528 "method": "bdev_malloc_create" 00:09:15.528 }, 00:09:15.528 { 00:09:15.528 "params": { 00:09:15.528 "filename": "/dev/zram1", 00:09:15.528 "name": "uring0" 00:09:15.528 }, 00:09:15.528 "method": "bdev_uring_create" 00:09:15.528 }, 00:09:15.528 { 00:09:15.528 "params": { 00:09:15.528 "name": "uring0" 00:09:15.528 }, 00:09:15.528 "method": 
"bdev_uring_delete" 00:09:15.528 }, 00:09:15.528 { 00:09:15.528 "method": "bdev_wait_for_examine" 00:09:15.528 } 00:09:15.528 ] 00:09:15.528 } 00:09:15.528 ] 00:09:15.528 } 00:09:15.528 [2024-12-05 02:53:46.214121] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:09:15.528 [2024-12-05 02:53:46.214308] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63464 ] 00:09:15.786 [2024-12-05 02:53:46.396294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.786 [2024-12-05 02:53:46.494735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.044 [2024-12-05 02:53:46.660193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:16.610 [2024-12-05 02:53:47.205878] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:16.610 [2024-12-05 02:53:47.205965] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:16.610 [2024-12-05 02:53:47.205996] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 00:09:16.610 [2024-12-05 02:53:47.206023] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:18.511 [2024-12-05 02:53:48.883982] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:18.511 02:53:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:09:18.511 02:53:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:18.511 02:53:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:09:18.511 02:53:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:09:18.512 02:53:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:09:18.512 02:53:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:18.512 02:53:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:18.512 02:53:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:09:18.512 02:53:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:09:18.512 02:53:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:09:18.512 02:53:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:09:18.512 02:53:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:18.770 00:09:18.770 real 0m28.478s 00:09:18.770 user 0m23.000s 00:09:18.770 sys 0m15.758s 00:09:18.770 02:53:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.770 02:53:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:18.770 ************************************ 00:09:18.770 END TEST dd_uring_copy 00:09:18.770 ************************************ 00:09:18.770 00:09:18.770 real 0m28.716s 00:09:18.770 user 0m23.126s 00:09:18.770 sys 0m15.872s 00:09:18.770 02:53:49 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.770 ************************************ 00:09:18.770 02:53:49 spdk_dd.spdk_dd_uring -- 
common/autotest_common.sh@10 -- # set +x 00:09:18.770 END TEST spdk_dd_uring 00:09:18.771 ************************************ 00:09:18.771 02:53:49 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:18.771 02:53:49 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:18.771 02:53:49 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.771 02:53:49 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:18.771 ************************************ 00:09:18.771 START TEST spdk_dd_sparse 00:09:18.771 ************************************ 00:09:18.771 02:53:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:19.030 * Looking for test storage... 00:09:19.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:19.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.030 --rc genhtml_branch_coverage=1 00:09:19.030 --rc genhtml_function_coverage=1 00:09:19.030 --rc genhtml_legend=1 00:09:19.030 --rc geninfo_all_blocks=1 00:09:19.030 --rc geninfo_unexecuted_blocks=1 00:09:19.030 00:09:19.030 ' 00:09:19.030 02:53:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:19.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.030 --rc genhtml_branch_coverage=1 00:09:19.030 --rc genhtml_function_coverage=1 00:09:19.030 --rc genhtml_legend=1 00:09:19.030 --rc geninfo_all_blocks=1 00:09:19.030 --rc geninfo_unexecuted_blocks=1 00:09:19.031 00:09:19.031 ' 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:19.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.031 --rc genhtml_branch_coverage=1 00:09:19.031 --rc genhtml_function_coverage=1 00:09:19.031 --rc genhtml_legend=1 00:09:19.031 --rc geninfo_all_blocks=1 00:09:19.031 --rc geninfo_unexecuted_blocks=1 00:09:19.031 00:09:19.031 ' 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:19.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.031 --rc genhtml_branch_coverage=1 00:09:19.031 --rc genhtml_function_coverage=1 00:09:19.031 --rc genhtml_legend=1 00:09:19.031 --rc geninfo_all_blocks=1 00:09:19.031 --rc geninfo_unexecuted_blocks=1 00:09:19.031 00:09:19.031 ' 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.031 02:53:49 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:19.031 1+0 records in 00:09:19.031 1+0 records out 00:09:19.031 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00601712 s, 697 MB/s 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:19.031 1+0 records in 00:09:19.031 1+0 records out 00:09:19.031 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00575459 s, 729 MB/s 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:19.031 1+0 records in 00:09:19.031 1+0 records out 00:09:19.031 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00512628 s, 818 MB/s 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:19.031 ************************************ 00:09:19.031 START TEST dd_sparse_file_to_file 00:09:19.031 ************************************ 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:19.031 02:53:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:19.031 { 00:09:19.031 "subsystems": [ 00:09:19.031 { 00:09:19.031 "subsystem": "bdev", 00:09:19.031 "config": [ 00:09:19.031 { 00:09:19.031 "params": { 00:09:19.031 "block_size": 4096, 00:09:19.031 "filename": "dd_sparse_aio_disk", 00:09:19.031 "name": "dd_aio" 00:09:19.031 }, 00:09:19.031 "method": "bdev_aio_create" 00:09:19.031 }, 00:09:19.031 { 00:09:19.031 "params": { 00:09:19.031 "lvs_name": "dd_lvstore", 00:09:19.031 "bdev_name": "dd_aio" 00:09:19.031 }, 00:09:19.031 "method": "bdev_lvol_create_lvstore" 00:09:19.031 }, 00:09:19.031 { 00:09:19.031 "method": "bdev_wait_for_examine" 00:09:19.031 } 00:09:19.031 ] 00:09:19.031 } 00:09:19.031 ] 00:09:19.031 } 00:09:19.290 [2024-12-05 02:53:49.902888] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
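The three dd writes in the prepare step above land at offsets 0, 16 MiB and 32 MiB (seek is counted in units of bs=4M), so file_zero1 has an apparent size of 36 MiB but only 12 MiB of allocated extents. A quick, hedged way to confirm the sparseness that the test asserts further down with stat (file name taken from this log):

stat --printf='apparent=%s bytes  allocated=%b blocks of %B bytes\n' file_zero1
# Expected for this layout: apparent=37748736 (36 MiB) and 24576 allocated 512-byte
# blocks (12 MiB); a fully written 36 MiB file would allocate roughly 73728 blocks.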
00:09:19.290 [2024-12-05 02:53:49.903080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63582 ] 00:09:19.290 [2024-12-05 02:53:50.091901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.548 [2024-12-05 02:53:50.217549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.806 [2024-12-05 02:53:50.409636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:19.806  [2024-12-05T02:53:52.023Z] Copying: 12/36 [MB] (average 1000 MBps) 00:09:21.179 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:21.179 00:09:21.179 real 0m1.851s 00:09:21.179 user 0m1.535s 00:09:21.179 sys 0m0.983s 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.179 ************************************ 00:09:21.179 END TEST dd_sparse_file_to_file 00:09:21.179 ************************************ 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:21.179 ************************************ 00:09:21.179 START TEST dd_sparse_file_to_bdev 00:09:21.179 ************************************ 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:21.179 02:53:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:21.179 { 00:09:21.179 "subsystems": [ 00:09:21.179 { 00:09:21.179 "subsystem": "bdev", 00:09:21.179 "config": [ 00:09:21.179 { 00:09:21.179 "params": { 00:09:21.179 "block_size": 4096, 00:09:21.179 "filename": "dd_sparse_aio_disk", 00:09:21.179 "name": "dd_aio" 00:09:21.179 }, 00:09:21.179 "method": "bdev_aio_create" 00:09:21.179 }, 00:09:21.179 { 00:09:21.179 "params": { 00:09:21.179 "lvs_name": "dd_lvstore", 00:09:21.179 "lvol_name": "dd_lvol", 00:09:21.179 "size_in_mib": 36, 00:09:21.179 "thin_provision": true 00:09:21.179 }, 00:09:21.179 "method": "bdev_lvol_create" 00:09:21.179 }, 00:09:21.179 { 00:09:21.179 "method": "bdev_wait_for_examine" 00:09:21.179 } 00:09:21.179 ] 00:09:21.179 } 00:09:21.179 ] 00:09:21.179 } 00:09:21.179 [2024-12-05 02:53:51.804011] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:09:21.179 [2024-12-05 02:53:51.804217] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63642 ] 00:09:21.179 [2024-12-05 02:53:51.984304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.436 [2024-12-05 02:53:52.079566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.436 [2024-12-05 02:53:52.253986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:21.693  [2024-12-05T02:53:53.470Z] Copying: 12/36 [MB] (average 571 MBps) 00:09:22.626 00:09:22.626 00:09:22.626 real 0m1.765s 00:09:22.626 user 0m1.481s 00:09:22.626 sys 0m0.977s 00:09:22.626 ************************************ 00:09:22.626 END TEST dd_sparse_file_to_bdev 00:09:22.626 ************************************ 00:09:22.626 02:53:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.626 02:53:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:22.884 02:53:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:09:22.884 02:53:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:22.884 02:53:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.884 02:53:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:22.884 ************************************ 00:09:22.884 START TEST dd_sparse_bdev_to_file 00:09:22.884 ************************************ 00:09:22.884 02:53:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:09:22.884 02:53:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
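For orientation, the sparse suite chains three copies, each with hole skipping enabled via --sparse: file to file, file into the thin-provisioned lvol created above, then lvol back out to a file. A hedged summary of the round trip (commands reproduced from this log; conf1.json, conf2.json and conf3.json are hypothetical stand-ins for the /dev/fd config descriptors the harness actually uses):

spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$spdk_dd" --if=file_zero1 --of=file_zero2         --bs=12582912 --sparse --json conf1.json
"$spdk_dd" --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json conf2.json
"$spdk_dd" --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json conf3.json
# If holes survive the round trip, apparent size and allocated blocks match the original:
stat --printf='%n %s %b\n' file_zero1 file_zero2 file_zero3   # expect 37748736 and 24576 each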
00:09:22.884 02:53:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:09:22.884 02:53:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:22.884 02:53:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:09:22.884 02:53:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:09:22.884 02:53:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:09:22.884 02:53:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:22.884 02:53:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:22.884 { 00:09:22.884 "subsystems": [ 00:09:22.884 { 00:09:22.884 "subsystem": "bdev", 00:09:22.884 "config": [ 00:09:22.884 { 00:09:22.884 "params": { 00:09:22.884 "block_size": 4096, 00:09:22.884 "filename": "dd_sparse_aio_disk", 00:09:22.884 "name": "dd_aio" 00:09:22.884 }, 00:09:22.884 "method": "bdev_aio_create" 00:09:22.884 }, 00:09:22.884 { 00:09:22.884 "method": "bdev_wait_for_examine" 00:09:22.884 } 00:09:22.884 ] 00:09:22.884 } 00:09:22.884 ] 00:09:22.884 } 00:09:22.884 [2024-12-05 02:53:53.631605] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:09:22.884 [2024-12-05 02:53:53.631907] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63687 ] 00:09:23.142 [2024-12-05 02:53:53.844894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.142 [2024-12-05 02:53:53.969925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.400 [2024-12-05 02:53:54.170128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:23.659  [2024-12-05T02:53:55.438Z] Copying: 12/36 [MB] (average 1000 MBps) 00:09:24.594 00:09:24.594 02:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:09:24.594 02:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:09:24.594 02:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:09:24.594 02:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:09:24.594 02:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:24.594 02:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:09:24.594 02:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:09:24.594 02:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:09:24.594 02:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:09:24.594 02:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:24.594 00:09:24.594 real 0m1.843s 00:09:24.594 user 0m1.491s 
00:09:24.594 sys 0m1.008s 00:09:24.594 02:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.594 ************************************ 00:09:24.594 END TEST dd_sparse_bdev_to_file 00:09:24.594 ************************************ 00:09:24.594 02:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:24.594 02:53:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:09:24.594 02:53:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:09:24.594 02:53:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:09:24.594 02:53:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:09:24.594 02:53:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:09:24.594 00:09:24.594 real 0m5.868s 00:09:24.594 user 0m4.689s 00:09:24.594 sys 0m3.188s 00:09:24.594 ************************************ 00:09:24.594 END TEST spdk_dd_sparse 00:09:24.594 ************************************ 00:09:24.594 02:53:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.594 02:53:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:24.854 02:53:55 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:24.854 02:53:55 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.854 02:53:55 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.854 02:53:55 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:24.854 ************************************ 00:09:24.854 START TEST spdk_dd_negative 00:09:24.854 ************************************ 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:24.854 * Looking for test storage... 
00:09:24.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:09:24.854 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:09:25.113 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.113 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:09:25.113 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.113 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:09:25.113 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:09:25.113 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.113 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:09:25.113 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.113 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.113 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.113 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:09:25.113 02:53:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.113 02:53:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:25.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.113 --rc genhtml_branch_coverage=1 00:09:25.113 --rc genhtml_function_coverage=1 00:09:25.113 --rc genhtml_legend=1 00:09:25.113 --rc geninfo_all_blocks=1 00:09:25.113 --rc geninfo_unexecuted_blocks=1 00:09:25.113 00:09:25.113 ' 00:09:25.113 02:53:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:25.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.113 --rc genhtml_branch_coverage=1 00:09:25.113 --rc genhtml_function_coverage=1 00:09:25.113 --rc genhtml_legend=1 00:09:25.113 --rc geninfo_all_blocks=1 00:09:25.114 --rc geninfo_unexecuted_blocks=1 00:09:25.114 00:09:25.114 ' 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:25.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.114 --rc genhtml_branch_coverage=1 00:09:25.114 --rc genhtml_function_coverage=1 00:09:25.114 --rc genhtml_legend=1 00:09:25.114 --rc geninfo_all_blocks=1 00:09:25.114 --rc geninfo_unexecuted_blocks=1 00:09:25.114 00:09:25.114 ' 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:25.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.114 --rc genhtml_branch_coverage=1 00:09:25.114 --rc genhtml_function_coverage=1 00:09:25.114 --rc genhtml_legend=1 00:09:25.114 --rc geninfo_all_blocks=1 00:09:25.114 --rc geninfo_unexecuted_blocks=1 00:09:25.114 00:09:25.114 ' 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:25.114 ************************************ 00:09:25.114 START TEST 
dd_invalid_arguments 00:09:25.114 ************************************ 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:25.114 02:53:55 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:25.114 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:25.114 00:09:25.114 CPU options: 00:09:25.114 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:25.114 (like [0,1,10]) 00:09:25.114 --lcores lcore to CPU mapping list. The list is in the format: 00:09:25.114 [<,lcores[@CPUs]>...] 00:09:25.114 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:25.114 Within the group, '-' is used for range separator, 00:09:25.114 ',' is used for single number separator. 00:09:25.114 '( )' can be omitted for single element group, 00:09:25.114 '@' can be omitted if cpus and lcores have the same value 00:09:25.114 --disable-cpumask-locks Disable CPU core lock files. 00:09:25.114 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:25.114 pollers in the app support interrupt mode) 00:09:25.114 -p, --main-core main (primary) core for DPDK 00:09:25.114 00:09:25.114 Configuration options: 00:09:25.114 -c, --config, --json JSON config file 00:09:25.114 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:25.114 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:09:25.114 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:25.114 --rpcs-allowed comma-separated list of permitted RPCS 00:09:25.114 --json-ignore-init-errors don't exit on invalid config entry 00:09:25.114 00:09:25.114 Memory options: 00:09:25.114 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:25.114 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:25.114 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:25.114 -R, --huge-unlink unlink huge files after initialization 00:09:25.114 -n, --mem-channels number of memory channels used for DPDK 00:09:25.114 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:25.114 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:25.114 --no-huge run without using hugepages 00:09:25.114 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:09:25.114 -i, --shm-id shared memory ID (optional) 00:09:25.114 -g, --single-file-segments force creating just one hugetlbfs file 00:09:25.114 00:09:25.114 PCI options: 00:09:25.114 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:25.114 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:25.114 -u, --no-pci disable PCI access 00:09:25.114 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:25.114 00:09:25.114 Log options: 00:09:25.114 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:09:25.114 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:09:25.114 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:09:25.114 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:09:25.114 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, fuse_dispatcher, 00:09:25.114 gpt_parse, idxd, ioat, iscsi_init, json_util, keyring, log_rpc, lvol, 00:09:25.114 lvol_rpc, notify_rpc, nvme, nvme_auth, nvme_cuse, nvme_vfio, opal, 00:09:25.114 reactor, rpc, rpc_client, scsi, sock, sock_posix, spdk_aio_mgr_io, 00:09:25.114 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:09:25.114 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, 00:09:25.114 vfu_virtio, vfu_virtio_blk, vfu_virtio_fs, vfu_virtio_fs_data, 00:09:25.114 vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 00:09:25.114 virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:09:25.114 --silence-noticelog disable notice level logging to stderr 00:09:25.114 00:09:25.114 Trace options: 00:09:25.114 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:25.114 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:25.114 [2024-12-05 02:53:55.819491] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 00:09:25.114 setting 0 to disable trace (default 32768) 00:09:25.114 Tracepoints vary in size and can use more than one trace entry. 00:09:25.114 -e, --tpoint-group [:] 00:09:25.114 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, 00:09:25.115 ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, 00:09:25.115 blob, bdev_raid, scheduler, all). 00:09:25.115 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:25.115 a tracepoint group. First tpoint inside a group can be enabled by 00:09:25.115 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:25.115 combined (e.g. 
thread,bdev:0x1). All available tpoints can be found 00:09:25.115 in /include/spdk_internal/trace_defs.h 00:09:25.115 00:09:25.115 Other options: 00:09:25.115 -h, --help show this usage 00:09:25.115 -v, --version print SPDK version 00:09:25.115 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:25.115 --env-context Opaque context for use of the env implementation 00:09:25.115 00:09:25.115 Application specific: 00:09:25.115 [--------- DD Options ---------] 00:09:25.115 --if Input file. Must specify either --if or --ib. 00:09:25.115 --ib Input bdev. Must specifier either --if or --ib 00:09:25.115 --of Output file. Must specify either --of or --ob. 00:09:25.115 --ob Output bdev. Must specify either --of or --ob. 00:09:25.115 --iflag Input file flags. 00:09:25.115 --oflag Output file flags. 00:09:25.115 --bs I/O unit size (default: 4096) 00:09:25.115 --qd Queue depth (default: 2) 00:09:25.115 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:25.115 --skip Skip this many I/O units at start of input. (default: 0) 00:09:25.115 --seek Skip this many I/O units at start of output. (default: 0) 00:09:25.115 --aio Force usage of AIO. (by default io_uring is used if available) 00:09:25.115 --sparse Enable hole skipping in input target 00:09:25.115 Available iflag and oflag values: 00:09:25.115 append - append mode 00:09:25.115 direct - use direct I/O for data 00:09:25.115 directory - fail unless a directory 00:09:25.115 dsync - use synchronized I/O for data 00:09:25.115 noatime - do not update access time 00:09:25.115 noctty - do not assign controlling terminal from file 00:09:25.115 nofollow - do not follow symlinks 00:09:25.115 nonblock - use non-blocking I/O 00:09:25.115 sync - use synchronized I/O for data and metadata 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:25.115 00:09:25.115 real 0m0.143s 00:09:25.115 user 0m0.070s 00:09:25.115 sys 0m0.072s 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:09:25.115 ************************************ 00:09:25.115 END TEST dd_invalid_arguments 00:09:25.115 ************************************ 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:25.115 ************************************ 00:09:25.115 START TEST dd_double_input 00:09:25.115 ************************************ 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:25.115 02:53:55 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:25.375 [2024-12-05 02:53:56.036654] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
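The double-input case above reduces to a single expected failure. A hedged standalone reproduction (binary and dump-file paths as they appear in this log):

# spdk_dd must refuse a file input (--if) and a bdev input (--ib) in the same run.
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
     --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 2>/dev/null; then
  echo "ERROR: spdk_dd accepted both --if and --ib" >&2
  exit 1
fi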
00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:25.375 00:09:25.375 real 0m0.173s 00:09:25.375 user 0m0.090s 00:09:25.375 sys 0m0.081s 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:09:25.375 ************************************ 00:09:25.375 END TEST dd_double_input 00:09:25.375 ************************************ 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:25.375 ************************************ 00:09:25.375 START TEST dd_double_output 00:09:25.375 ************************************ 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.375 02:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:25.376 02:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:25.637 [2024-12-05 02:53:56.243329] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:25.637 00:09:25.637 real 0m0.150s 00:09:25.637 user 0m0.071s 00:09:25.637 sys 0m0.077s 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:09:25.637 ************************************ 00:09:25.637 END TEST dd_double_output 00:09:25.637 ************************************ 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:25.637 ************************************ 00:09:25.637 START TEST dd_no_input 00:09:25.637 ************************************ 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:25.637 02:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:25.637 [2024-12-05 02:53:56.463211] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:25.896 00:09:25.896 real 0m0.178s 00:09:25.896 user 0m0.089s 00:09:25.896 sys 0m0.086s 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:09:25.896 ************************************ 00:09:25.896 END TEST dd_no_input 00:09:25.896 ************************************ 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:25.896 ************************************ 00:09:25.896 START TEST dd_no_output 00:09:25.896 ************************************ 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:25.896 02:53:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:25.896 [2024-12-05 02:53:56.694089] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 00:09:26.156 02:53:56 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:26.156 00:09:26.156 real 0m0.175s 00:09:26.156 user 0m0.105s 00:09:26.156 sys 0m0.068s 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:09:26.156 ************************************ 00:09:26.156 END TEST dd_no_output 00:09:26.156 ************************************ 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:26.156 ************************************ 00:09:26.156 START TEST dd_wrong_blocksize 00:09:26.156 ************************************ 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:26.156 [2024-12-05 02:53:56.895885] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:26.156 00:09:26.156 real 0m0.146s 00:09:26.156 user 0m0.075s 00:09:26.156 sys 0m0.069s 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.156 ************************************ 00:09:26.156 END TEST dd_wrong_blocksize 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:26.156 ************************************ 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.156 02:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:26.416 ************************************ 00:09:26.416 START TEST dd_smaller_blocksize 00:09:26.416 ************************************ 00:09:26.416 02:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:09:26.416 02:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:26.416 02:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:26.416 02:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:26.416 02:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:26.416 02:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:26.416 02:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:26.416 02:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:26.416 02:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:26.416 02:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:26.416 02:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:26.416 
02:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:26.416 02:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:26.416 [2024-12-05 02:53:57.125673] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:09:26.416 [2024-12-05 02:53:57.125858] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63942 ] 00:09:26.675 [2024-12-05 02:53:57.307328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.675 [2024-12-05 02:53:57.400363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.934 [2024-12-05 02:53:57.544826] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:27.193 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:27.453 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:27.453 [2024-12-05 02:53:58.170142] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:27.453 [2024-12-05 02:53:58.170261] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:28.021 [2024-12-05 02:53:58.771480] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:28.281 00:09:28.281 real 0m2.000s 00:09:28.281 user 0m1.289s 00:09:28.281 sys 0m0.598s 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.281 ************************************ 00:09:28.281 END TEST dd_smaller_blocksize 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:28.281 ************************************ 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:28.281 ************************************ 00:09:28.281 START TEST dd_invalid_count 00:09:28.281 ************************************ 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
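Annotation: the dd_smaller_blocksize run above also shows how these negative tests normalise spdk_dd's exit status before deciding pass/fail — the trace records es=244, then es=116 after the (( es > 128 )) check, and finally es=1. The helper below is only a rough sketch of that idea under a hypothetical name (not_ok); it is not the actual NOT/valid_exec_arg code from autotest_common.sh.

# Hypothetical sketch of the status handling visible in the trace
# (es=244 -> es=116 -> pass); not the literal autotest_common.sh helper.
not_ok() {
    local es=0
    "$@" || es=$?
    if (( es > 128 )); then
        es=$(( es - 128 ))   # statuses above 128 usually mean "killed by a signal"
    fi
    if (( es == 0 )); then
        return 1             # the command unexpectedly succeeded
    fi
    return 0                 # any failure is what a negative test wants
}

Wrapped around the oversized --bs=99999999999999 invocation above, such a helper passes exactly when spdk_dd refuses the copy, which is what the log records.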
00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:28.281 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:28.540 [2024-12-05 02:53:59.170524] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:28.540 00:09:28.540 real 0m0.175s 00:09:28.540 user 0m0.095s 00:09:28.540 sys 0m0.078s 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:09:28.540 ************************************ 00:09:28.540 END TEST dd_invalid_count 00:09:28.540 ************************************ 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:28.540 ************************************ 
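Annotation: the dd_invalid_count case that just finished is easy to reproduce by hand — spdk_dd is given a negative --count, answers with "Invalid --count value", and the trace records es=22 afterwards. A minimal stand-alone sketch, reusing the same binary and dump-file paths that appear in the trace:

# Sketch only: re-running the dd_invalid_count invocation outside the harness.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

"$SPDK_DD" --if="$DUMP0" --of="$DUMP1" --count=-9
echo "exit status: $?"   # non-zero expected; the trace above records es=22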
00:09:28.540 START TEST dd_invalid_oflag 00:09:28.540 ************************************ 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.540 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:28.541 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:28.541 [2024-12-05 02:53:59.362445] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:28.800 00:09:28.800 real 0m0.132s 00:09:28.800 user 0m0.066s 00:09:28.800 sys 0m0.065s 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:09:28.800 ************************************ 00:09:28.800 END TEST dd_invalid_oflag 00:09:28.800 ************************************ 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:28.800 ************************************ 00:09:28.800 START TEST dd_invalid_iflag 00:09:28.800 
************************************ 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:28.800 [2024-12-05 02:53:59.570566] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:28.800 00:09:28.800 real 0m0.170s 00:09:28.800 user 0m0.095s 00:09:28.800 sys 0m0.073s 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.800 02:53:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:09:28.800 ************************************ 00:09:28.800 END TEST dd_invalid_iflag 00:09:28.800 ************************************ 00:09:29.058 02:53:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:09:29.058 02:53:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.058 02:53:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.058 02:53:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:29.058 ************************************ 00:09:29.058 START TEST dd_unknown_flag 00:09:29.058 ************************************ 00:09:29.058 
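Annotation: dd_invalid_oflag and dd_invalid_iflag, both finished above, probe the pairing rules between flag options and file options — --oflag is only meaningful together with --of, and --iflag only together with --if, which is exactly what the two logged error messages say. A sketch of the two invocations, using the same binary and arguments as the trace but without the NOT wrapper:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

"$SPDK_DD" --ib= --ob= --oflag=0   # rejected: --oflags may be used only with --of
"$SPDK_DD" --ib= --ob= --iflag=0   # rejected: --iflags may be used only with --if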
02:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:09:29.058 02:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:29.058 02:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:09:29.058 02:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:29.058 02:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.058 02:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.058 02:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.058 02:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.058 02:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.058 02:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.058 02:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.058 02:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:29.058 02:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:29.058 [2024-12-05 02:53:59.791287] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:09:29.058 [2024-12-05 02:53:59.791458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64055 ] 00:09:29.317 [2024-12-05 02:53:59.969640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.317 [2024-12-05 02:54:00.056283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.576 [2024-12-05 02:54:00.213448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:29.576 [2024-12-05 02:54:00.309250] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:09:29.576 [2024-12-05 02:54:00.309351] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:29.576 [2024-12-05 02:54:00.309420] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:09:29.576 [2024-12-05 02:54:00.309443] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:29.576 [2024-12-05 02:54:00.309736] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:09:29.576 [2024-12-05 02:54:00.309796] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:29.576 [2024-12-05 02:54:00.309868] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:29.576 [2024-12-05 02:54:00.309890] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:30.142 [2024-12-05 02:54:00.941425] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:30.400 00:09:30.400 real 0m1.495s 00:09:30.400 user 0m1.209s 00:09:30.400 sys 0m0.181s 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.400 ************************************ 00:09:30.400 END TEST dd_unknown_flag 00:09:30.400 ************************************ 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:30.400 ************************************ 00:09:30.400 START TEST dd_invalid_json 00:09:30.400 ************************************ 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.400 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.401 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:30.401 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:30.658 [2024-12-05 02:54:01.342290] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:09:30.658 [2024-12-05 02:54:01.342490] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64094 ] 00:09:30.916 [2024-12-05 02:54:01.522209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.916 [2024-12-05 02:54:01.622559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.916 [2024-12-05 02:54:01.622655] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:09:30.916 [2024-12-05 02:54:01.622680] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:30.916 [2024-12-05 02:54:01.622697] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:30.916 [2024-12-05 02:54:01.622805] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:31.175 00:09:31.175 real 0m0.648s 00:09:31.175 user 0m0.402s 00:09:31.175 sys 0m0.142s 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.175 ************************************ 00:09:31.175 END TEST dd_invalid_json 00:09:31.175 ************************************ 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:31.175 ************************************ 00:09:31.175 START TEST dd_invalid_seek 00:09:31.175 ************************************ 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:31.175 
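Annotation: the dd_invalid_json case above feeds spdk_dd a --json argument that points at an empty document (the trace uses /dev/fd/62), and spdk_dd refuses it with "JSON data cannot be empty". A sketch of the same shape; the process substitution here is just one convenient way to hand over an empty config and is not necessarily the mechanism the test script itself uses:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

"$SPDK_DD" --if="$DUMP0" --of="$DUMP1" --json <(printf '') \
    || echo "rejected as expected: JSON data cannot be empty"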
02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:31.175 02:54:01 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:31.175 { 00:09:31.175 "subsystems": [ 00:09:31.175 { 00:09:31.175 "subsystem": "bdev", 00:09:31.175 "config": [ 00:09:31.175 { 00:09:31.175 "params": { 00:09:31.175 "block_size": 512, 00:09:31.175 "num_blocks": 512, 00:09:31.175 "name": "malloc0" 00:09:31.175 }, 00:09:31.175 "method": "bdev_malloc_create" 00:09:31.175 }, 00:09:31.175 { 00:09:31.175 "params": { 00:09:31.175 "block_size": 512, 00:09:31.175 "num_blocks": 512, 00:09:31.175 "name": "malloc1" 00:09:31.175 }, 00:09:31.175 "method": "bdev_malloc_create" 00:09:31.175 }, 00:09:31.175 { 00:09:31.175 "method": "bdev_wait_for_examine" 00:09:31.175 } 00:09:31.175 ] 00:09:31.175 } 00:09:31.175 ] 00:09:31.175 } 00:09:31.434 [2024-12-05 02:54:02.038533] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:09:31.434 [2024-12-05 02:54:02.038707] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64126 ] 00:09:31.434 [2024-12-05 02:54:02.219437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.692 [2024-12-05 02:54:02.316511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.692 [2024-12-05 02:54:02.478435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.952 [2024-12-05 02:54:02.601868] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:09:31.952 [2024-12-05 02:54:02.601965] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:32.521 [2024-12-05 02:54:03.262724] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:32.781 00:09:32.781 real 0m1.589s 00:09:32.781 user 0m1.314s 00:09:32.781 sys 0m0.220s 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.781 ************************************ 00:09:32.781 END TEST dd_invalid_seek 00:09:32.781 ************************************ 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:32.781 ************************************ 00:09:32.781 START TEST dd_invalid_skip 00:09:32.781 ************************************ 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.781 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.782 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.782 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.782 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.782 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.782 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:32.782 02:54:03 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:33.042 { 00:09:33.042 "subsystems": [ 00:09:33.042 { 00:09:33.042 "subsystem": "bdev", 00:09:33.042 "config": [ 00:09:33.042 { 00:09:33.042 "params": { 00:09:33.042 "block_size": 512, 00:09:33.042 "num_blocks": 512, 00:09:33.042 "name": "malloc0" 00:09:33.042 }, 00:09:33.042 "method": "bdev_malloc_create" 00:09:33.042 }, 00:09:33.042 { 00:09:33.042 "params": { 00:09:33.042 "block_size": 512, 00:09:33.042 "num_blocks": 512, 00:09:33.042 "name": "malloc1" 00:09:33.042 }, 00:09:33.042 "method": "bdev_malloc_create" 00:09:33.042 }, 00:09:33.042 { 00:09:33.042 "method": "bdev_wait_for_examine" 00:09:33.042 } 00:09:33.042 ] 00:09:33.042 } 00:09:33.042 ] 00:09:33.042 } 00:09:33.042 [2024-12-05 02:54:03.689629] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:09:33.042 [2024-12-05 02:54:03.689848] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64177 ] 00:09:33.042 [2024-12-05 02:54:03.872209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.302 [2024-12-05 02:54:03.957182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.302 [2024-12-05 02:54:04.103525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:33.561 [2024-12-05 02:54:04.215994] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:09:33.561 [2024-12-05 02:54:04.216096] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:34.131 [2024-12-05 02:54:04.828109] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:34.390 00:09:34.390 real 0m1.490s 00:09:34.390 user 0m1.244s 00:09:34.390 sys 0m0.197s 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.390 ************************************ 00:09:34.390 END TEST dd_invalid_skip 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:34.390 ************************************ 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:34.390 ************************************ 00:09:34.390 START TEST dd_invalid_input_count 00:09:34.390 ************************************ 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.390 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.391 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.391 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:34.391 02:54:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:34.391 { 00:09:34.391 "subsystems": [ 00:09:34.391 { 00:09:34.391 "subsystem": "bdev", 00:09:34.391 "config": [ 00:09:34.391 { 00:09:34.391 "params": { 00:09:34.391 "block_size": 512, 00:09:34.391 "num_blocks": 512, 00:09:34.391 "name": "malloc0" 00:09:34.391 }, 00:09:34.391 "method": "bdev_malloc_create" 00:09:34.391 }, 00:09:34.391 { 00:09:34.391 "params": { 00:09:34.391 "block_size": 512, 00:09:34.391 "num_blocks": 512, 00:09:34.391 "name": "malloc1" 00:09:34.391 }, 00:09:34.391 "method": "bdev_malloc_create" 00:09:34.391 }, 00:09:34.391 { 00:09:34.391 "method": "bdev_wait_for_examine" 00:09:34.391 } 00:09:34.391 ] 00:09:34.391 } 00:09:34.391 ] 00:09:34.391 } 00:09:34.649 [2024-12-05 02:54:05.258688] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:09:34.649 [2024-12-05 02:54:05.258912] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64217 ] 00:09:34.649 [2024-12-05 02:54:05.434598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.906 [2024-12-05 02:54:05.530051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.906 [2024-12-05 02:54:05.712699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:35.164 [2024-12-05 02:54:05.840508] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:09:35.164 [2024-12-05 02:54:05.840610] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:35.730 [2024-12-05 02:54:06.476213] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.988 00:09:35.988 real 0m1.586s 00:09:35.988 user 0m1.352s 00:09:35.988 sys 0m0.214s 00:09:35.988 ************************************ 00:09:35.988 END TEST dd_invalid_input_count 00:09:35.988 ************************************ 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:35.988 ************************************ 00:09:35.988 START TEST dd_invalid_output_count 00:09:35.988 ************************************ 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:35.988 02:54:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:35.988 { 00:09:35.988 "subsystems": [ 00:09:35.988 { 00:09:35.988 "subsystem": "bdev", 00:09:35.988 "config": [ 00:09:35.988 { 00:09:35.988 "params": { 00:09:35.988 "block_size": 512, 00:09:35.988 "num_blocks": 512, 00:09:35.988 "name": "malloc0" 00:09:35.988 }, 00:09:35.988 "method": "bdev_malloc_create" 00:09:35.988 }, 00:09:35.988 { 00:09:35.988 "method": "bdev_wait_for_examine" 00:09:35.988 } 00:09:35.988 ] 00:09:35.988 } 00:09:35.988 ] 00:09:35.988 } 00:09:36.247 [2024-12-05 02:54:06.872145] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:09:36.247 [2024-12-05 02:54:06.872533] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64268 ] 00:09:36.247 [2024-12-05 02:54:07.049485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.505 [2024-12-05 02:54:07.143833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.505 [2024-12-05 02:54:07.290876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.763 [2024-12-05 02:54:07.395741] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:09:36.763 [2024-12-05 02:54:07.395839] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:37.331 [2024-12-05 02:54:08.022629] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:37.592 ************************************ 00:09:37.592 END TEST dd_invalid_output_count 00:09:37.592 ************************************ 00:09:37.592 00:09:37.592 real 0m1.514s 00:09:37.592 user 0m1.254s 00:09:37.592 sys 0m0.205s 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:37.592 ************************************ 00:09:37.592 START TEST dd_bs_not_multiple 00:09:37.592 ************************************ 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:37.592 02:54:08 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:37.592 02:54:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:37.592 { 00:09:37.592 "subsystems": [ 00:09:37.592 { 00:09:37.592 "subsystem": "bdev", 00:09:37.592 "config": [ 00:09:37.592 { 00:09:37.592 "params": { 00:09:37.592 "block_size": 512, 00:09:37.592 "num_blocks": 512, 00:09:37.592 "name": "malloc0" 00:09:37.592 }, 00:09:37.592 "method": "bdev_malloc_create" 00:09:37.592 }, 00:09:37.592 { 00:09:37.592 "params": { 00:09:37.592 "block_size": 512, 00:09:37.592 "num_blocks": 512, 00:09:37.592 "name": "malloc1" 00:09:37.592 }, 00:09:37.592 "method": "bdev_malloc_create" 00:09:37.592 }, 00:09:37.592 { 00:09:37.592 "method": "bdev_wait_for_examine" 00:09:37.592 } 00:09:37.592 ] 00:09:37.592 } 00:09:37.592 ] 00:09:37.592 } 00:09:37.856 [2024-12-05 02:54:08.443367] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
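dd_bs_not_multiple, the last negative case in this group, copies between two 512-byte-block malloc bdevs with --bs=513; because 513 is not a multiple of the native block size, spdk_dd rejects the request before doing any I/O, which is the error logged just below. A short sketch of that rule under the same assumed binary path; a --bs that is a multiple of 512 (1024, say) would get past this particular check:

    # Hypothetical check of the "--bs must be a multiple of the native block
    # size" rule; the two-bdev config mirrors the JSON printed above.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    conf='{ "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_malloc_create",
        "params": { "name": "malloc0", "num_blocks": 512, "block_size": 512 } },
      { "method": "bdev_malloc_create",
        "params": { "name": "malloc1", "num_blocks": 512, "block_size": 512 } },
      { "method": "bdev_wait_for_examine" } ] } ] }'

    # 513 % 512 != 0 -> rejected; 1024 % 512 == 0 -> accepted by this check.
    "$SPDK_DD" --ib=malloc0 --ob=malloc1 --bs=513  --json <(echo "$conf") || echo "rejected as expected"
    "$SPDK_DD" --ib=malloc0 --ob=malloc1 --bs=1024 --json <(echo "$conf")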
00:09:37.856 [2024-12-05 02:54:08.443858] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64306 ] 00:09:37.856 [2024-12-05 02:54:08.625253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.145 [2024-12-05 02:54:08.711336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.145 [2024-12-05 02:54:08.868600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:38.421 [2024-12-05 02:54:08.985816] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:09:38.421 [2024-12-05 02:54:08.985904] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:38.989 [2024-12-05 02:54:09.596302] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:38.989 02:54:09 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:09:38.989 02:54:09 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:38.989 02:54:09 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:09:38.989 ************************************ 00:09:38.989 END TEST dd_bs_not_multiple 00:09:38.989 ************************************ 00:09:38.989 02:54:09 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:09:38.989 02:54:09 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:09:38.989 02:54:09 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:38.989 00:09:38.989 real 0m1.494s 00:09:38.989 user 0m1.241s 00:09:38.989 sys 0m0.220s 00:09:38.989 02:54:09 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.989 02:54:09 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:39.248 ************************************ 00:09:39.248 END TEST spdk_dd_negative 00:09:39.248 ************************************ 00:09:39.248 00:09:39.248 real 0m14.403s 00:09:39.248 user 0m10.507s 00:09:39.248 sys 0m3.260s 00:09:39.248 02:54:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.248 02:54:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:39.248 ************************************ 00:09:39.248 END TEST spdk_dd 00:09:39.248 ************************************ 00:09:39.248 00:09:39.248 real 2m48.531s 00:09:39.248 user 2m15.033s 00:09:39.248 sys 1m2.887s 00:09:39.248 02:54:09 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.248 02:54:09 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:39.248 02:54:09 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:39.248 02:54:09 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:39.248 02:54:09 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:39.248 02:54:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:39.248 02:54:09 -- common/autotest_common.sh@10 -- # set +x 00:09:39.248 02:54:09 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:39.248 02:54:09 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:09:39.248 02:54:09 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:09:39.248 02:54:09 -- spdk/autotest.sh@277 
-- # export NET_TYPE 00:09:39.248 02:54:09 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:09:39.248 02:54:09 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:09:39.248 02:54:09 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:39.248 02:54:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:39.248 02:54:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.248 02:54:09 -- common/autotest_common.sh@10 -- # set +x 00:09:39.248 ************************************ 00:09:39.248 START TEST nvmf_tcp 00:09:39.248 ************************************ 00:09:39.248 02:54:10 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:39.248 * Looking for test storage... 00:09:39.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:39.248 02:54:10 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:39.507 02:54:10 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:39.507 02:54:10 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:39.507 02:54:10 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.507 02:54:10 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:39.507 02:54:10 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.507 02:54:10 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:39.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.507 --rc genhtml_branch_coverage=1 00:09:39.507 --rc genhtml_function_coverage=1 00:09:39.507 --rc genhtml_legend=1 00:09:39.507 --rc geninfo_all_blocks=1 00:09:39.507 --rc geninfo_unexecuted_blocks=1 00:09:39.507 00:09:39.507 ' 00:09:39.507 02:54:10 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:39.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.507 --rc genhtml_branch_coverage=1 00:09:39.507 --rc genhtml_function_coverage=1 00:09:39.507 --rc genhtml_legend=1 00:09:39.507 --rc geninfo_all_blocks=1 00:09:39.507 --rc geninfo_unexecuted_blocks=1 00:09:39.507 00:09:39.507 ' 00:09:39.507 02:54:10 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:39.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.507 --rc genhtml_branch_coverage=1 00:09:39.507 --rc genhtml_function_coverage=1 00:09:39.507 --rc genhtml_legend=1 00:09:39.507 --rc geninfo_all_blocks=1 00:09:39.507 --rc geninfo_unexecuted_blocks=1 00:09:39.507 00:09:39.507 ' 00:09:39.507 02:54:10 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:39.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.507 --rc genhtml_branch_coverage=1 00:09:39.507 --rc genhtml_function_coverage=1 00:09:39.507 --rc genhtml_legend=1 00:09:39.507 --rc geninfo_all_blocks=1 00:09:39.507 --rc geninfo_unexecuted_blocks=1 00:09:39.507 00:09:39.507 ' 00:09:39.507 02:54:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:39.507 02:54:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:39.507 02:54:10 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:39.507 02:54:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:39.507 02:54:10 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.507 02:54:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:39.507 ************************************ 00:09:39.507 START TEST nvmf_target_core 00:09:39.507 ************************************ 00:09:39.507 02:54:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:39.507 * Looking for test storage... 00:09:39.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:39.507 02:54:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:39.507 02:54:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:09:39.507 02:54:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:39.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.767 --rc genhtml_branch_coverage=1 00:09:39.767 --rc genhtml_function_coverage=1 00:09:39.767 --rc genhtml_legend=1 00:09:39.767 --rc geninfo_all_blocks=1 00:09:39.767 --rc geninfo_unexecuted_blocks=1 00:09:39.767 00:09:39.767 ' 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:39.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.767 --rc genhtml_branch_coverage=1 00:09:39.767 --rc genhtml_function_coverage=1 00:09:39.767 --rc genhtml_legend=1 00:09:39.767 --rc geninfo_all_blocks=1 00:09:39.767 --rc geninfo_unexecuted_blocks=1 00:09:39.767 00:09:39.767 ' 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:39.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.767 --rc genhtml_branch_coverage=1 00:09:39.767 --rc genhtml_function_coverage=1 00:09:39.767 --rc genhtml_legend=1 00:09:39.767 --rc geninfo_all_blocks=1 00:09:39.767 --rc geninfo_unexecuted_blocks=1 00:09:39.767 00:09:39.767 ' 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:39.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.767 --rc genhtml_branch_coverage=1 00:09:39.767 --rc genhtml_function_coverage=1 00:09:39.767 --rc genhtml_legend=1 00:09:39.767 --rc geninfo_all_blocks=1 00:09:39.767 --rc geninfo_unexecuted_blocks=1 00:09:39.767 00:09:39.767 ' 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.767 02:54:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:39.768 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:39.768 ************************************ 00:09:39.768 START TEST nvmf_host_management 00:09:39.768 ************************************ 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:39.768 * Looking for test storage... 
00:09:39.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:09:39.768 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:40.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.028 --rc genhtml_branch_coverage=1 00:09:40.028 --rc genhtml_function_coverage=1 00:09:40.028 --rc genhtml_legend=1 00:09:40.028 --rc geninfo_all_blocks=1 00:09:40.028 --rc geninfo_unexecuted_blocks=1 00:09:40.028 00:09:40.028 ' 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:40.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.028 --rc genhtml_branch_coverage=1 00:09:40.028 --rc genhtml_function_coverage=1 00:09:40.028 --rc genhtml_legend=1 00:09:40.028 --rc geninfo_all_blocks=1 00:09:40.028 --rc geninfo_unexecuted_blocks=1 00:09:40.028 00:09:40.028 ' 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:40.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.028 --rc genhtml_branch_coverage=1 00:09:40.028 --rc genhtml_function_coverage=1 00:09:40.028 --rc genhtml_legend=1 00:09:40.028 --rc geninfo_all_blocks=1 00:09:40.028 --rc geninfo_unexecuted_blocks=1 00:09:40.028 00:09:40.028 ' 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:40.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.028 --rc genhtml_branch_coverage=1 00:09:40.028 --rc genhtml_function_coverage=1 00:09:40.028 --rc genhtml_legend=1 00:09:40.028 --rc geninfo_all_blocks=1 00:09:40.028 --rc geninfo_unexecuted_blocks=1 00:09:40.028 00:09:40.028 ' 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
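The lcov --version / lt 1.15 2 sequence that keeps reappearing in these per-test preambles is the harness deciding whether the installed lcov predates 2.x so it can pick the older --rc lcov_branch_coverage/lcov_function_coverage option names. The comparison splits each version string on dots and compares field by field. A condensed, hypothetical version of that pattern (the real helpers live in scripts/common.sh and also split on '-' and ':'):

    # Hypothetical field-wise dotted-version compare in the spirit of the
    # cmp_versions/lt helpers exercised above.
    version_lt() {                       # returns 0 (true) if $1 < $2
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }

    version_lt 1.15 2 && echo "lcov < 2: use --rc lcov_branch_coverage=1"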
00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.028 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:40.029 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:40.029 02:54:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:40.029 Cannot find device "nvmf_init_br" 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:40.029 Cannot find device "nvmf_init_br2" 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:40.029 Cannot find device "nvmf_tgt_br" 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:40.029 Cannot find device "nvmf_tgt_br2" 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:40.029 Cannot find device "nvmf_init_br" 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:40.029 Cannot find device "nvmf_init_br2" 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:40.029 Cannot find device "nvmf_tgt_br" 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:40.029 Cannot find device "nvmf_tgt_br2" 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:40.029 Cannot find device "nvmf_br" 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:40.029 Cannot find device "nvmf_init_if" 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:40.029 Cannot find device "nvmf_init_if2" 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:40.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:40.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:40.029 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:40.288 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:40.288 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:40.289 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:40.289 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:40.289 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:40.289 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:40.289 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:40.289 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:40.289 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:40.289 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:40.289 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:40.289 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:40.289 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:40.289 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:40.289 02:54:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:09:40.289 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:40.289 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:40.289 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:40.289 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:40.289 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:40.289 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:40.289 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:40.289 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:40.289 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:40.289 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:40.289 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:40.289 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:40.289 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:40.289 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:09:40.289 00:09:40.289 --- 10.0.0.3 ping statistics --- 00:09:40.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.289 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:09:40.289 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:40.289 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:40.289 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:09:40.289 00:09:40.289 --- 10.0.0.4 ping statistics --- 00:09:40.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.289 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:40.289 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:40.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:40.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:40.289 00:09:40.289 --- 10.0.0.1 ping statistics --- 00:09:40.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.289 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:40.289 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:40.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:40.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:09:40.548 00:09:40.548 --- 10.0.0.2 ping statistics --- 00:09:40.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.548 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=64661 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 64661 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 64661 ']' 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.548 02:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.548 [2024-12-05 02:54:11.282336] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
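The target process started below runs inside the virtual topology that nvmf_veth_init assembled above: veth pairs for two initiator and two target interfaces, the nvmf_tgt_ns_spdk namespace holding the target ends (10.0.0.3 and 10.0.0.4), the nvmf_br bridge tying the host-side peers together, and iptables ACCEPT rules for port 4420; the four pings confirm the plumbing. A trimmed sketch of an equivalent layout with a single initiator/target pair (the reduction to one pair is an assumption; names and addresses match the log):

    # Hypothetical one-pair version of the veth/namespace/bridge topology
    # built by nvmf_veth_init above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3        # host initiator -> target namespace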
00:09:40.548 [2024-12-05 02:54:11.282505] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.808 [2024-12-05 02:54:11.474739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.808 [2024-12-05 02:54:11.616242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.808 [2024-12-05 02:54:11.616338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.808 [2024-12-05 02:54:11.616375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.808 [2024-12-05 02:54:11.616390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.808 [2024-12-05 02:54:11.616407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.808 [2024-12-05 02:54:11.618792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.808 [2024-12-05 02:54:11.618944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.808 [2024-12-05 02:54:11.619081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.808 [2024-12-05 02:54:11.619101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:41.067 [2024-12-05 02:54:11.800345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.636 [2024-12-05 02:54:12.304456] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
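The -m 0x1E core mask passed to nvmf_tgt explains both notices above: DPDK reports four cores available, and reactors start on cores 1-4 only, because 0x1E is binary 11110 (bit 0, core 0, is clear; bits 1 through 4 are set). An illustrative one-liner to expand such a mask:

    mask=0x1E
    for core in $(seq 0 63); do
        (( (mask >> core) & 1 )) && echo "core $core enabled"
    done
    # prints: core 1 enabled ... core 4 enabled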
00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.636 Malloc0 00:09:41.636 [2024-12-05 02:54:12.420696] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=64715 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64715 /var/tmp/bdevperf.sock 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 64715 ']' 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
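The cat of rpcs.txt piped through rpc_cmd above is what produces the Malloc0 bdev and the NVMe/TCP listener reported on 10.0.0.3:4420; the file itself is never echoed into the log. A hypothetical batch with the standard rpc.py commands that would yield those notices (the method names are real RPCs, the exact arguments here are assumptions):

    # Hypothetical rpcs.txt contents; only its effects (the Malloc0 bdev and the
    # listener on 10.0.0.3:4420) are visible in the log above.
    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0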
00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:41.636 { 00:09:41.636 "params": { 00:09:41.636 "name": "Nvme$subsystem", 00:09:41.636 "trtype": "$TEST_TRANSPORT", 00:09:41.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.636 "adrfam": "ipv4", 00:09:41.636 "trsvcid": "$NVMF_PORT", 00:09:41.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.636 "hdgst": ${hdgst:-false}, 00:09:41.636 "ddgst": ${ddgst:-false} 00:09:41.636 }, 00:09:41.636 "method": "bdev_nvme_attach_controller" 00:09:41.636 } 00:09:41.636 EOF 00:09:41.636 )") 00:09:41.636 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:41.895 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:41.895 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:41.895 02:54:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:41.895 "params": { 00:09:41.895 "name": "Nvme0", 00:09:41.895 "trtype": "tcp", 00:09:41.895 "traddr": "10.0.0.3", 00:09:41.895 "adrfam": "ipv4", 00:09:41.895 "trsvcid": "4420", 00:09:41.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:41.895 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:41.895 "hdgst": false, 00:09:41.895 "ddgst": false 00:09:41.895 }, 00:09:41.895 "method": "bdev_nvme_attach_controller" 00:09:41.895 }' 00:09:41.895 [2024-12-05 02:54:12.585898] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:09:41.895 [2024-12-05 02:54:12.586059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64715 ] 00:09:42.154 [2024-12-05 02:54:12.761063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.154 [2024-12-05 02:54:12.857260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.413 [2024-12-05 02:54:13.037976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.413 Running I/O for 10 seconds... 
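In the bdevperf command traced above, --json /dev/fd/63 is the read end of a bash process substitution: gen_nvmf_target_json emits the JSON printed just above, and bdevperf consumes it as its startup configuration, so the initiator-side Nvme0 controller is attached to 10.0.0.3:4420 without any manual RPC calls. A sketch of the same invocation written out explicitly (illustrative; the test drives it through helper functions):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10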
00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.983 [2024-12-05 
02:54:13.642026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.642556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.642721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.642862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.642952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.643057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.643145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.643227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.643302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.643408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.643490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.643590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.643667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.643779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.643866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.643945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.644019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.644099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.644173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.644256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 
02:54:13.644357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.644450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.644541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.644636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.644704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.644794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.644867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.644954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.645031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.645109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.645182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.645261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.645336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.645428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.645509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.645590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.645665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.645742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.645840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.645925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.645992] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 [2024-12-05 02:54:13.646064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.983 [2024-12-05 02:54:13.646138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.983 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.983 [2024-12-05 02:54:13.646221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.646317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.646406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:42.984 [2024-12-05 02:54:13.646486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.646579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.984 [2024-12-05 02:54:13.646675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.646767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.984 [2024-12-05 02:54:13.646866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.646958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.647057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.647154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.647230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.647332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.647408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.647498] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.647573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.647666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.647743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.647852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.647942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.648034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.648123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.648215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.648295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.648392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.648481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.648569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.648637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.648725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.648810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.648900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.648981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.649075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.649172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.649265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.649353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.649428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.649503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.649585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.649673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.649781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.649880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.649949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.650022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.650134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.650226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.650340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.650424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.650501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.650568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.650660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.650775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.650874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.650965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.651043] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.651118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.651210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.651286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.651378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.651471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.651564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.651651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.651725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.651820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.651905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.651993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.652080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.652146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.652238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.652315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.652412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.652500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.652577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.652652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.652748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.652793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.984 [2024-12-05 02:54:13.652810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.984 [2024-12-05 02:54:13.652827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.985 [2024-12-05 02:54:13.652841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.985 [2024-12-05 02:54:13.652857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.985 [2024-12-05 02:54:13.652872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.985 [2024-12-05 02:54:13.652889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.985 [2024-12-05 02:54:13.652902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.985 [2024-12-05 02:54:13.652918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.985 [2024-12-05 02:54:13.652931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.985 [2024-12-05 02:54:13.652946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set 00:09:42.985 [2024-12-05 02:54:13.653350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:42.985 [2024-12-05 02:54:13.653387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.985 [2024-12-05 02:54:13.653406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:42.985 [2024-12-05 02:54:13.653419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.985 [2024-12-05 02:54:13.653434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:42.985 [2024-12-05 02:54:13.653446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.985 [2024-12-05 02:54:13.653461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:42.985 [2024-12-05 02:54:13.653473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.985 [2024-12-05 02:54:13.653486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x61500002ad80 is same with the state(6) to be set 00:09:42.985 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.985 02:54:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:42.985 [2024-12-05 02:54:13.654779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:09:42.985 task offset: 71424 on job bdev=Nvme0n1 fails 00:09:42.985 00:09:42.985 Latency(us) 00:09:42.985 [2024-12-05T02:54:13.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.985 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:42.985 Job: Nvme0n1 ended in about 0.42 seconds with error 00:09:42.985 Verification LBA range: start 0x0 length 0x400 00:09:42.985 Nvme0n1 : 0.42 1210.89 75.68 151.36 0.00 45441.24 12153.95 50760.61 00:09:42.985 [2024-12-05T02:54:13.829Z] =================================================================================================================== 00:09:42.985 [2024-12-05T02:54:13.829Z] Total : 1210.89 75.68 151.36 0.00 45441.24 12153.95 50760.61 00:09:42.985 [2024-12-05 02:54:13.659981] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:42.985 [2024-12-05 02:54:13.660038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:09:42.985 [2024-12-05 02:54:13.674568] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:09:43.921 02:54:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64715 00:09:43.921 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64715) - No such process 00:09:43.921 02:54:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:43.921 02:54:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:43.921 02:54:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:43.921 02:54:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:43.921 02:54:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:43.921 02:54:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:43.921 02:54:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:43.921 02:54:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:43.921 { 00:09:43.921 "params": { 00:09:43.921 "name": "Nvme$subsystem", 00:09:43.921 "trtype": "$TEST_TRANSPORT", 00:09:43.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.921 "adrfam": "ipv4", 00:09:43.921 "trsvcid": "$NVMF_PORT", 00:09:43.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.921 "hdgst": ${hdgst:-false}, 00:09:43.921 "ddgst": ${ddgst:-false} 00:09:43.921 }, 00:09:43.921 "method": "bdev_nvme_attach_controller" 00:09:43.921 } 00:09:43.921 EOF 
00:09:43.921 )") 00:09:43.921 02:54:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:43.921 02:54:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:43.921 02:54:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:43.921 02:54:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:43.921 "params": { 00:09:43.921 "name": "Nvme0", 00:09:43.921 "trtype": "tcp", 00:09:43.921 "traddr": "10.0.0.3", 00:09:43.921 "adrfam": "ipv4", 00:09:43.921 "trsvcid": "4420", 00:09:43.921 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:43.921 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:43.921 "hdgst": false, 00:09:43.921 "ddgst": false 00:09:43.921 }, 00:09:43.921 "method": "bdev_nvme_attach_controller" 00:09:43.921 }' 00:09:43.921 [2024-12-05 02:54:14.758710] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:09:43.921 [2024-12-05 02:54:14.758884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64760 ] 00:09:44.180 [2024-12-05 02:54:14.928718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.437 [2024-12-05 02:54:15.031531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.437 [2024-12-05 02:54:15.216716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:44.694 Running I/O for 1 seconds... 00:09:45.627 1344.00 IOPS, 84.00 MiB/s 00:09:45.627 Latency(us) 00:09:45.627 [2024-12-05T02:54:16.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.627 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:45.627 Verification LBA range: start 0x0 length 0x400 00:09:45.627 Nvme0n1 : 1.03 1363.76 85.23 0.00 0.00 46042.35 5719.51 41704.73 00:09:45.627 [2024-12-05T02:54:16.471Z] =================================================================================================================== 00:09:45.627 [2024-12-05T02:54:16.471Z] Total : 1363.76 85.23 0.00 0.00 46042.35 5719.51 41704.73 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.000 rmmod nvme_tcp 00:09:47.000 rmmod nvme_fabrics 00:09:47.000 rmmod nvme_keyring 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 64661 ']' 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 64661 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 64661 ']' 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 64661 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64661 00:09:47.000 killing process with pid 64661 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64661' 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 64661 00:09:47.000 02:54:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 64661 00:09:47.937 [2024-12-05 02:54:18.579360] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:47.937 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:47.937 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:47.938 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:47.938 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:47.938 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:47.938 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:47.938 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:47.938 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:47.938 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:47.938 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:47.938 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 
nomaster 00:09:47.938 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:47.938 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:47.938 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:47.938 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:47.938 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:47.938 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:47.938 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:48.197 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:48.197 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:48.197 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:48.197 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:48.197 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:48.197 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.197 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.197 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.197 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:09:48.197 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:48.197 00:09:48.197 real 0m8.450s 00:09:48.197 user 0m31.856s 00:09:48.197 sys 0m1.722s 00:09:48.197 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.197 ************************************ 00:09:48.197 END TEST nvmf_host_management 00:09:48.197 ************************************ 00:09:48.197 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:48.197 02:54:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:48.197 02:54:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:48.197 02:54:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.197 02:54:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:48.197 ************************************ 00:09:48.197 START TEST nvmf_lvol 00:09:48.197 ************************************ 00:09:48.197 02:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:48.197 * Looking for test storage... 
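Before the lvol test proceeds, the two bdevperf tables from the host-management run above can be sanity-checked: with a 64 KiB I/O size (-o 65536), IOPS times the I/O size reproduces the MiB/s column for both runs. The user time (0m31.856s) being roughly four times the real time (0m8.450s) is also consistent with SPDK's polling reactors, which burn CPU on all four target cores for the whole run. For example:

    # 65536-byte I/Os: IOPS * 65536 / 1048576 = MiB/s
    echo 'scale=2; 1210.89 * 65536 / 1048576' | bc   # 75.68 (first run, aborted by host removal)
    echo 'scale=2; 1363.76 * 65536 / 1048576' | bc   # 85.23 (second run)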
00:09:48.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:48.457 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:48.457 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:48.457 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:09:48.457 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:48.457 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.457 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.457 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.457 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.457 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.457 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.457 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.457 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:48.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.458 --rc genhtml_branch_coverage=1 00:09:48.458 --rc genhtml_function_coverage=1 00:09:48.458 --rc genhtml_legend=1 00:09:48.458 --rc geninfo_all_blocks=1 00:09:48.458 --rc geninfo_unexecuted_blocks=1 00:09:48.458 00:09:48.458 ' 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:48.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.458 --rc genhtml_branch_coverage=1 00:09:48.458 --rc genhtml_function_coverage=1 00:09:48.458 --rc genhtml_legend=1 00:09:48.458 --rc geninfo_all_blocks=1 00:09:48.458 --rc geninfo_unexecuted_blocks=1 00:09:48.458 00:09:48.458 ' 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:48.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.458 --rc genhtml_branch_coverage=1 00:09:48.458 --rc genhtml_function_coverage=1 00:09:48.458 --rc genhtml_legend=1 00:09:48.458 --rc geninfo_all_blocks=1 00:09:48.458 --rc geninfo_unexecuted_blocks=1 00:09:48.458 00:09:48.458 ' 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:48.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.458 --rc genhtml_branch_coverage=1 00:09:48.458 --rc genhtml_function_coverage=1 00:09:48.458 --rc genhtml_legend=1 00:09:48.458 --rc geninfo_all_blocks=1 00:09:48.458 --rc geninfo_unexecuted_blocks=1 00:09:48.458 00:09:48.458 ' 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.458 02:54:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:48.458 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:48.458 
02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:48.458 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:48.459 Cannot find device "nvmf_init_br" 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:48.459 Cannot find device "nvmf_init_br2" 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:48.459 Cannot find device "nvmf_tgt_br" 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:48.459 Cannot find device "nvmf_tgt_br2" 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:48.459 Cannot find device "nvmf_init_br" 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:48.459 Cannot find device "nvmf_init_br2" 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:48.459 Cannot find device "nvmf_tgt_br" 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:48.459 Cannot find device "nvmf_tgt_br2" 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:09:48.459 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:48.718 Cannot find device "nvmf_br" 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:48.718 Cannot find device "nvmf_init_if" 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:48.718 Cannot find device "nvmf_init_if2" 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:48.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:48.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:48.718 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:48.978 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:48.978 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:09:48.978 00:09:48.978 --- 10.0.0.3 ping statistics --- 00:09:48.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.978 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:48.978 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:48.978 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:09:48.978 00:09:48.978 --- 10.0.0.4 ping statistics --- 00:09:48.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.978 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:48.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:48.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:09:48.978 00:09:48.978 --- 10.0.0.1 ping statistics --- 00:09:48.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.978 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:48.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:48.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:09:48.978 00:09:48.978 --- 10.0.0.2 ping statistics --- 00:09:48.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.978 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=65047 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 65047 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 65047 ']' 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.978 02:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:48.978 [2024-12-05 02:54:19.752774] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:09:48.978 [2024-12-05 02:54:19.752931] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.237 [2024-12-05 02:54:19.943259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:49.237 [2024-12-05 02:54:20.071896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.237 [2024-12-05 02:54:20.071968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.237 [2024-12-05 02:54:20.071994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.237 [2024-12-05 02:54:20.072010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.237 [2024-12-05 02:54:20.072029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.237 [2024-12-05 02:54:20.074049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.237 [2024-12-05 02:54:20.074187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.237 [2024-12-05 02:54:20.074201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.496 [2024-12-05 02:54:20.272789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:50.063 02:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.063 02:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:50.063 02:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:50.063 02:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:50.063 02:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:50.063 02:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.063 02:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:50.322 [2024-12-05 02:54:21.087274] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:50.322 02:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.888 02:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:50.888 02:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.146 02:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:51.146 02:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:51.409 02:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:51.667 02:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9d322342-ab35-4afa-9c50-79a2576abe5e 00:09:51.667 02:54:22 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9d322342-ab35-4afa-9c50-79a2576abe5e lvol 20 00:09:51.925 02:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=69bd1d72-1752-4466-8667-a26d34652f13 00:09:51.925 02:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:52.183 02:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 69bd1d72-1752-4466-8667-a26d34652f13 00:09:52.442 02:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:52.699 [2024-12-05 02:54:23.427584] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:52.699 02:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:52.957 02:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65128 00:09:52.957 02:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:52.957 02:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:53.892 02:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 69bd1d72-1752-4466-8667-a26d34652f13 MY_SNAPSHOT 00:09:54.150 02:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=59cebb0d-0187-48f6-a3bc-39c07725dcba 00:09:54.150 02:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 69bd1d72-1752-4466-8667-a26d34652f13 30 00:09:54.717 02:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 59cebb0d-0187-48f6-a3bc-39c07725dcba MY_CLONE 00:09:54.974 02:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ec38a180-8bf4-456d-b75e-d79c59bb6ff8 00:09:54.974 02:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate ec38a180-8bf4-456d-b75e-d79c59bb6ff8 00:09:55.649 02:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65128 00:10:03.762 Initializing NVMe Controllers 00:10:03.762 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:10:03.762 Controller IO queue size 128, less than required. 00:10:03.762 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:03.762 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:03.762 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:03.762 Initialization complete. Launching workers. 
00:10:03.762 ======================================================== 00:10:03.762 Latency(us) 00:10:03.762 Device Information : IOPS MiB/s Average min max 00:10:03.762 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9059.60 35.39 14135.50 645.22 160734.81 00:10:03.762 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9102.80 35.56 14064.28 4740.52 178080.23 00:10:03.762 ======================================================== 00:10:03.762 Total : 18162.40 70.95 14099.81 645.22 178080.23 00:10:03.762 00:10:03.762 02:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:03.762 02:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 69bd1d72-1752-4466-8667-a26d34652f13 00:10:04.020 02:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9d322342-ab35-4afa-9c50-79a2576abe5e 00:10:04.277 02:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:04.277 02:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:04.277 02:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:04.277 02:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:04.277 02:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:04.277 02:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:04.277 02:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:04.277 02:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:04.277 02:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:04.277 rmmod nvme_tcp 00:10:04.277 rmmod nvme_fabrics 00:10:04.277 rmmod nvme_keyring 00:10:04.277 02:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:04.277 02:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:04.277 02:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:04.277 02:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 65047 ']' 00:10:04.277 02:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 65047 00:10:04.277 02:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 65047 ']' 00:10:04.277 02:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 65047 00:10:04.277 02:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:10:04.277 02:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.278 02:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65047 00:10:04.278 killing process with pid 65047 00:10:04.278 02:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.278 02:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.278 02:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65047' 00:10:04.278 02:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 65047 00:10:04.278 02:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 65047 00:10:05.650 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:05.650 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:05.650 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:05.650 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:05.650 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:10:05.650 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:05.650 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:10:05.650 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:05.650 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:05.650 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:05.650 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:05.909 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:05.909 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:05.909 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:05.909 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:05.909 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:05.909 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:05.909 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:05.909 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:05.909 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:05.909 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:05.909 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:05.909 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:05.909 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.909 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.909 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.909 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:10:05.909 00:10:05.909 real 0m17.765s 00:10:05.909 user 1m10.379s 00:10:05.909 sys 0m4.110s 00:10:05.909 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:05.909 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:05.909 ************************************ 00:10:05.909 END TEST nvmf_lvol 00:10:05.909 ************************************ 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:06.168 ************************************ 00:10:06.168 START TEST nvmf_lvs_grow 00:10:06.168 ************************************ 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:06.168 * Looking for test storage... 00:10:06.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:06.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.168 --rc genhtml_branch_coverage=1 00:10:06.168 --rc genhtml_function_coverage=1 00:10:06.168 --rc genhtml_legend=1 00:10:06.168 --rc geninfo_all_blocks=1 00:10:06.168 --rc geninfo_unexecuted_blocks=1 00:10:06.168 00:10:06.168 ' 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:06.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.168 --rc genhtml_branch_coverage=1 00:10:06.168 --rc genhtml_function_coverage=1 00:10:06.168 --rc genhtml_legend=1 00:10:06.168 --rc geninfo_all_blocks=1 00:10:06.168 --rc geninfo_unexecuted_blocks=1 00:10:06.168 00:10:06.168 ' 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:06.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.168 --rc genhtml_branch_coverage=1 00:10:06.168 --rc genhtml_function_coverage=1 00:10:06.168 --rc genhtml_legend=1 00:10:06.168 --rc geninfo_all_blocks=1 00:10:06.168 --rc geninfo_unexecuted_blocks=1 00:10:06.168 00:10:06.168 ' 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:06.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.168 --rc genhtml_branch_coverage=1 00:10:06.168 --rc genhtml_function_coverage=1 00:10:06.168 --rc genhtml_legend=1 00:10:06.168 --rc geninfo_all_blocks=1 00:10:06.168 --rc geninfo_unexecuted_blocks=1 00:10:06.168 00:10:06.168 ' 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:06.168 02:54:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.168 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.169 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.169 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.169 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.169 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.169 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.169 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.169 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.169 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:06.169 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:10:06.169 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:06.169 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:06.169 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.169 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:06.169 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:06.169 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:06.169 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.169 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.169 02:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:06.169 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:06.427 Cannot find device "nvmf_init_br" 00:10:06.427 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:06.427 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:06.427 Cannot find device "nvmf_init_br2" 00:10:06.427 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:06.427 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:06.427 Cannot find device "nvmf_tgt_br" 00:10:06.427 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:10:06.427 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:06.427 Cannot find device "nvmf_tgt_br2" 00:10:06.427 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:10:06.427 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:06.427 Cannot find device "nvmf_init_br" 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:06.428 Cannot find device "nvmf_init_br2" 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:06.428 Cannot find device "nvmf_tgt_br" 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:06.428 Cannot find device "nvmf_tgt_br2" 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:06.428 Cannot find device "nvmf_br" 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:06.428 Cannot find device "nvmf_init_if" 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:06.428 Cannot find device "nvmf_init_if2" 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:06.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:06.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:06.428 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
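The sequence above amounts to emulating an initiator host and a target host on one machine: two initiator-side veth pairs stay in the default namespace, the two target-side interfaces are moved into the nvmf_tgt_ns_spdk namespace, and the four peer ends are joined by the nvmf_br bridge so that 10.0.0.1/10.0.0.2 (initiator) can reach 10.0.0.3/10.0.0.4 (target). A minimal standalone sketch of the same topology with plain iproute2, reusing the interface names and addresses from the trace (run as root; this is an illustration of what nvmf_veth_init does, not the harness function itself):

# namespace that will host the NVMe-oF target
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry addresses, the *_br ends get enslaved to the bridge
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# target-side interfaces live inside the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: initiator on .1/.2, target on .3/.4, all in 10.0.0.0/24
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# bring everything up, inside and outside the namespace
ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# one bridge ties the four peer ends together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

The iptables rules and ping checks that follow in the trace then open TCP port 4420 on the initiator-facing interfaces and confirm that both namespaces can reach each other before the target is started.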
00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:06.687 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:06.687 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:10:06.687 00:10:06.687 --- 10.0.0.3 ping statistics --- 00:10:06.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.687 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:06.687 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:06.687 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:10:06.687 00:10:06.687 --- 10.0.0.4 ping statistics --- 00:10:06.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.687 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:06.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:06.687 00:10:06.687 --- 10.0.0.1 ping statistics --- 00:10:06.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.687 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:06.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:06.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:10:06.687 00:10:06.687 --- 10.0.0.2 ping statistics --- 00:10:06.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.687 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=65531 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 65531 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 65531 ']' 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.687 02:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:06.687 [2024-12-05 02:54:37.525299] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:10:06.687 [2024-12-05 02:54:37.525457] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.946 [2024-12-05 02:54:37.710785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.204 [2024-12-05 02:54:37.806127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.204 [2024-12-05 02:54:37.806199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.204 [2024-12-05 02:54:37.806216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.204 [2024-12-05 02:54:37.806264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.204 [2024-12-05 02:54:37.806282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.204 [2024-12-05 02:54:37.807511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.204 [2024-12-05 02:54:37.965355] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:07.771 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.771 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:10:07.771 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:07.771 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:07.771 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:07.771 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.771 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:08.030 [2024-12-05 02:54:38.786993] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.030 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:08.030 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:08.030 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.030 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:08.030 ************************************ 00:10:08.030 START TEST lvs_grow_clean 00:10:08.030 ************************************ 00:10:08.030 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:10:08.030 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:08.030 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:08.030 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:08.031 02:54:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:08.031 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:08.031 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:08.031 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:08.031 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:08.031 02:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:08.599 02:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:08.599 02:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:08.858 02:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8a8a948c-8e7c-46a0-9ae3-53e027e05203 00:10:08.858 02:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8a948c-8e7c-46a0-9ae3-53e027e05203 00:10:08.858 02:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:09.117 02:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:09.118 02:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:09.118 02:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8a8a948c-8e7c-46a0-9ae3-53e027e05203 lvol 150 00:10:09.377 02:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e7a1e313-cee5-4b5a-bb9b-c77d448b8e39 00:10:09.377 02:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:09.377 02:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:09.637 [2024-12-05 02:54:40.349034] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:09.637 [2024-12-05 02:54:40.349156] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:09.637 true 00:10:09.637 02:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8a948c-8e7c-46a0-9ae3-53e027e05203 00:10:09.637 02:54:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:09.896 02:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:09.896 02:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:10.154 02:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e7a1e313-cee5-4b5a-bb9b-c77d448b8e39 00:10:10.414 02:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:10.673 [2024-12-05 02:54:41.353937] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:10.673 02:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:10.932 02:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65619 00:10:10.932 02:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:10.932 02:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:10.932 02:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65619 /var/tmp/bdevperf.sock 00:10:10.932 02:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 65619 ']' 00:10:10.932 02:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:10.932 02:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:10.932 02:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:10.932 02:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.932 02:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:11.191 [2024-12-05 02:54:41.787872] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:10:11.191 [2024-12-05 02:54:41.788045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65619 ] 00:10:11.191 [2024-12-05 02:54:41.974145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.451 [2024-12-05 02:54:42.099985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.451 [2024-12-05 02:54:42.279317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:12.019 02:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.019 02:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:10:12.020 02:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:12.279 Nvme0n1 00:10:12.279 02:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:12.538 [ 00:10:12.538 { 00:10:12.538 "name": "Nvme0n1", 00:10:12.538 "aliases": [ 00:10:12.538 "e7a1e313-cee5-4b5a-bb9b-c77d448b8e39" 00:10:12.538 ], 00:10:12.538 "product_name": "NVMe disk", 00:10:12.538 "block_size": 4096, 00:10:12.538 "num_blocks": 38912, 00:10:12.538 "uuid": "e7a1e313-cee5-4b5a-bb9b-c77d448b8e39", 00:10:12.538 "numa_id": -1, 00:10:12.538 "assigned_rate_limits": { 00:10:12.538 "rw_ios_per_sec": 0, 00:10:12.538 "rw_mbytes_per_sec": 0, 00:10:12.538 "r_mbytes_per_sec": 0, 00:10:12.538 "w_mbytes_per_sec": 0 00:10:12.538 }, 00:10:12.538 "claimed": false, 00:10:12.538 "zoned": false, 00:10:12.538 "supported_io_types": { 00:10:12.538 "read": true, 00:10:12.538 "write": true, 00:10:12.538 "unmap": true, 00:10:12.538 "flush": true, 00:10:12.538 "reset": true, 00:10:12.538 "nvme_admin": true, 00:10:12.539 "nvme_io": true, 00:10:12.539 "nvme_io_md": false, 00:10:12.539 "write_zeroes": true, 00:10:12.539 "zcopy": false, 00:10:12.539 "get_zone_info": false, 00:10:12.539 "zone_management": false, 00:10:12.539 "zone_append": false, 00:10:12.539 "compare": true, 00:10:12.539 "compare_and_write": true, 00:10:12.539 "abort": true, 00:10:12.539 "seek_hole": false, 00:10:12.539 "seek_data": false, 00:10:12.539 "copy": true, 00:10:12.539 "nvme_iov_md": false 00:10:12.539 }, 00:10:12.539 "memory_domains": [ 00:10:12.539 { 00:10:12.539 "dma_device_id": "system", 00:10:12.539 "dma_device_type": 1 00:10:12.539 } 00:10:12.539 ], 00:10:12.539 "driver_specific": { 00:10:12.539 "nvme": [ 00:10:12.539 { 00:10:12.539 "trid": { 00:10:12.539 "trtype": "TCP", 00:10:12.539 "adrfam": "IPv4", 00:10:12.539 "traddr": "10.0.0.3", 00:10:12.539 "trsvcid": "4420", 00:10:12.539 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:12.539 }, 00:10:12.539 "ctrlr_data": { 00:10:12.539 "cntlid": 1, 00:10:12.539 "vendor_id": "0x8086", 00:10:12.539 "model_number": "SPDK bdev Controller", 00:10:12.539 "serial_number": "SPDK0", 00:10:12.539 "firmware_revision": "25.01", 00:10:12.539 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:12.539 "oacs": { 00:10:12.539 "security": 0, 00:10:12.539 "format": 0, 00:10:12.539 "firmware": 0, 
00:10:12.539 "ns_manage": 0 00:10:12.539 }, 00:10:12.539 "multi_ctrlr": true, 00:10:12.539 "ana_reporting": false 00:10:12.539 }, 00:10:12.539 "vs": { 00:10:12.539 "nvme_version": "1.3" 00:10:12.539 }, 00:10:12.539 "ns_data": { 00:10:12.539 "id": 1, 00:10:12.539 "can_share": true 00:10:12.539 } 00:10:12.539 } 00:10:12.539 ], 00:10:12.539 "mp_policy": "active_passive" 00:10:12.539 } 00:10:12.539 } 00:10:12.539 ] 00:10:12.539 02:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65643 00:10:12.539 02:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:12.539 02:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:12.798 Running I/O for 10 seconds... 00:10:13.736 Latency(us) 00:10:13.736 [2024-12-05T02:54:44.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.737 Nvme0n1 : 1.00 5588.00 21.83 0.00 0.00 0.00 0.00 0.00 00:10:13.737 [2024-12-05T02:54:44.581Z] =================================================================================================================== 00:10:13.737 [2024-12-05T02:54:44.581Z] Total : 5588.00 21.83 0.00 0.00 0.00 0.00 0.00 00:10:13.737 00:10:14.673 02:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8a8a948c-8e7c-46a0-9ae3-53e027e05203 00:10:14.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.673 Nvme0n1 : 2.00 5588.00 21.83 0.00 0.00 0.00 0.00 0.00 00:10:14.673 [2024-12-05T02:54:45.517Z] =================================================================================================================== 00:10:14.673 [2024-12-05T02:54:45.517Z] Total : 5588.00 21.83 0.00 0.00 0.00 0.00 0.00 00:10:14.673 00:10:14.931 true 00:10:14.931 02:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:14.931 02:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8a948c-8e7c-46a0-9ae3-53e027e05203 00:10:15.497 02:54:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:15.497 02:54:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:15.497 02:54:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65643 00:10:15.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.756 Nvme0n1 : 3.00 5619.67 21.95 0.00 0.00 0.00 0.00 0.00 00:10:15.756 [2024-12-05T02:54:46.600Z] =================================================================================================================== 00:10:15.756 [2024-12-05T02:54:46.600Z] Total : 5619.67 21.95 0.00 0.00 0.00 0.00 0.00 00:10:15.756 00:10:16.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.690 Nvme0n1 : 4.00 5707.00 22.29 0.00 0.00 0.00 0.00 0.00 00:10:16.690 [2024-12-05T02:54:47.534Z] 
=================================================================================================================== 00:10:16.690 [2024-12-05T02:54:47.534Z] Total : 5707.00 22.29 0.00 0.00 0.00 0.00 0.00 00:10:16.690 00:10:17.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.679 Nvme0n1 : 5.00 5734.00 22.40 0.00 0.00 0.00 0.00 0.00 00:10:17.679 [2024-12-05T02:54:48.523Z] =================================================================================================================== 00:10:17.679 [2024-12-05T02:54:48.523Z] Total : 5734.00 22.40 0.00 0.00 0.00 0.00 0.00 00:10:17.679 00:10:18.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.628 Nvme0n1 : 6.00 5773.17 22.55 0.00 0.00 0.00 0.00 0.00 00:10:18.628 [2024-12-05T02:54:49.472Z] =================================================================================================================== 00:10:18.628 [2024-12-05T02:54:49.472Z] Total : 5773.17 22.55 0.00 0.00 0.00 0.00 0.00 00:10:18.628 00:10:20.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:20.003 Nvme0n1 : 7.00 5783.00 22.59 0.00 0.00 0.00 0.00 0.00 00:10:20.003 [2024-12-05T02:54:50.847Z] =================================================================================================================== 00:10:20.003 [2024-12-05T02:54:50.847Z] Total : 5783.00 22.59 0.00 0.00 0.00 0.00 0.00 00:10:20.003 00:10:20.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:20.938 Nvme0n1 : 8.00 5774.50 22.56 0.00 0.00 0.00 0.00 0.00 00:10:20.938 [2024-12-05T02:54:51.782Z] =================================================================================================================== 00:10:20.938 [2024-12-05T02:54:51.782Z] Total : 5774.50 22.56 0.00 0.00 0.00 0.00 0.00 00:10:20.938 00:10:21.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.874 Nvme0n1 : 9.00 5782.00 22.59 0.00 0.00 0.00 0.00 0.00 00:10:21.874 [2024-12-05T02:54:52.718Z] =================================================================================================================== 00:10:21.874 [2024-12-05T02:54:52.718Z] Total : 5782.00 22.59 0.00 0.00 0.00 0.00 0.00 00:10:21.874 00:10:22.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.816 Nvme0n1 : 10.00 5788.00 22.61 0.00 0.00 0.00 0.00 0.00 00:10:22.816 [2024-12-05T02:54:53.660Z] =================================================================================================================== 00:10:22.816 [2024-12-05T02:54:53.660Z] Total : 5788.00 22.61 0.00 0.00 0.00 0.00 0.00 00:10:22.816 00:10:22.816 00:10:22.816 Latency(us) 00:10:22.816 [2024-12-05T02:54:53.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.816 Nvme0n1 : 10.02 5790.13 22.62 0.00 0.00 22098.80 18826.71 61484.68 00:10:22.816 [2024-12-05T02:54:53.660Z] =================================================================================================================== 00:10:22.816 [2024-12-05T02:54:53.660Z] Total : 5790.13 22.62 0.00 0.00 22098.80 18826.71 61484.68 00:10:22.816 { 00:10:22.816 "results": [ 00:10:22.816 { 00:10:22.816 "job": "Nvme0n1", 00:10:22.816 "core_mask": "0x2", 00:10:22.816 "workload": "randwrite", 00:10:22.816 "status": "finished", 00:10:22.816 "queue_depth": 128, 00:10:22.816 "io_size": 4096, 00:10:22.816 "runtime": 
10.018428, 00:10:22.816 "iops": 5790.129948530847, 00:10:22.816 "mibps": 22.617695111448622, 00:10:22.816 "io_failed": 0, 00:10:22.816 "io_timeout": 0, 00:10:22.816 "avg_latency_us": 22098.80148142576, 00:10:22.816 "min_latency_us": 18826.705454545456, 00:10:22.816 "max_latency_us": 61484.68363636364 00:10:22.816 } 00:10:22.816 ], 00:10:22.816 "core_count": 1 00:10:22.816 } 00:10:22.816 02:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65619 00:10:22.816 02:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 65619 ']' 00:10:22.816 02:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 65619 00:10:22.816 02:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:10:22.816 02:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.816 02:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65619 00:10:22.816 02:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:22.816 02:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:22.816 02:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65619' 00:10:22.816 killing process with pid 65619 00:10:22.816 02:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 65619 00:10:22.816 Received shutdown signal, test time was about 10.000000 seconds 00:10:22.816 00:10:22.816 Latency(us) 00:10:22.816 [2024-12-05T02:54:53.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.816 [2024-12-05T02:54:53.660Z] =================================================================================================================== 00:10:22.816 [2024-12-05T02:54:53.660Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:22.816 02:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 65619 00:10:23.751 02:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:24.009 02:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:24.269 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8a948c-8e7c-46a0-9ae3-53e027e05203 00:10:24.269 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:24.526 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:24.526 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:24.526 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:24.783 [2024-12-05 02:54:55.607626] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:25.041 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8a948c-8e7c-46a0-9ae3-53e027e05203 00:10:25.041 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:10:25.041 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8a948c-8e7c-46a0-9ae3-53e027e05203 00:10:25.041 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.041 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.041 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.041 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.041 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.041 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.041 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.041 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:25.041 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8a948c-8e7c-46a0-9ae3-53e027e05203 00:10:25.300 request: 00:10:25.300 { 00:10:25.300 "uuid": "8a8a948c-8e7c-46a0-9ae3-53e027e05203", 00:10:25.300 "method": "bdev_lvol_get_lvstores", 00:10:25.300 "req_id": 1 00:10:25.300 } 00:10:25.300 Got JSON-RPC error response 00:10:25.300 response: 00:10:25.300 { 00:10:25.300 "code": -19, 00:10:25.300 "message": "No such device" 00:10:25.300 } 00:10:25.300 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:10:25.300 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:25.300 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:25.300 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:25.300 02:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:25.559 aio_bdev 00:10:25.559 02:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
e7a1e313-cee5-4b5a-bb9b-c77d448b8e39 00:10:25.559 02:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=e7a1e313-cee5-4b5a-bb9b-c77d448b8e39 00:10:25.559 02:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:25.559 02:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:10:25.559 02:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:25.559 02:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:25.559 02:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:25.817 02:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e7a1e313-cee5-4b5a-bb9b-c77d448b8e39 -t 2000 00:10:26.077 [ 00:10:26.077 { 00:10:26.077 "name": "e7a1e313-cee5-4b5a-bb9b-c77d448b8e39", 00:10:26.077 "aliases": [ 00:10:26.077 "lvs/lvol" 00:10:26.077 ], 00:10:26.077 "product_name": "Logical Volume", 00:10:26.077 "block_size": 4096, 00:10:26.077 "num_blocks": 38912, 00:10:26.077 "uuid": "e7a1e313-cee5-4b5a-bb9b-c77d448b8e39", 00:10:26.077 "assigned_rate_limits": { 00:10:26.077 "rw_ios_per_sec": 0, 00:10:26.077 "rw_mbytes_per_sec": 0, 00:10:26.077 "r_mbytes_per_sec": 0, 00:10:26.077 "w_mbytes_per_sec": 0 00:10:26.077 }, 00:10:26.077 "claimed": false, 00:10:26.077 "zoned": false, 00:10:26.078 "supported_io_types": { 00:10:26.078 "read": true, 00:10:26.078 "write": true, 00:10:26.078 "unmap": true, 00:10:26.078 "flush": false, 00:10:26.078 "reset": true, 00:10:26.078 "nvme_admin": false, 00:10:26.078 "nvme_io": false, 00:10:26.078 "nvme_io_md": false, 00:10:26.078 "write_zeroes": true, 00:10:26.078 "zcopy": false, 00:10:26.078 "get_zone_info": false, 00:10:26.078 "zone_management": false, 00:10:26.078 "zone_append": false, 00:10:26.078 "compare": false, 00:10:26.078 "compare_and_write": false, 00:10:26.078 "abort": false, 00:10:26.078 "seek_hole": true, 00:10:26.078 "seek_data": true, 00:10:26.078 "copy": false, 00:10:26.078 "nvme_iov_md": false 00:10:26.078 }, 00:10:26.078 "driver_specific": { 00:10:26.078 "lvol": { 00:10:26.078 "lvol_store_uuid": "8a8a948c-8e7c-46a0-9ae3-53e027e05203", 00:10:26.078 "base_bdev": "aio_bdev", 00:10:26.078 "thin_provision": false, 00:10:26.078 "num_allocated_clusters": 38, 00:10:26.078 "snapshot": false, 00:10:26.078 "clone": false, 00:10:26.078 "esnap_clone": false 00:10:26.078 } 00:10:26.078 } 00:10:26.078 } 00:10:26.078 ] 00:10:26.078 02:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:10:26.078 02:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8a948c-8e7c-46a0-9ae3-53e027e05203 00:10:26.078 02:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:26.337 02:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:26.338 02:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8a948c-8e7c-46a0-9ae3-53e027e05203 00:10:26.338 02:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:26.616 02:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:26.616 02:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e7a1e313-cee5-4b5a-bb9b-c77d448b8e39 00:10:26.875 02:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8a8a948c-8e7c-46a0-9ae3-53e027e05203 00:10:27.134 02:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:27.703 02:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:27.962 ************************************ 00:10:27.962 END TEST lvs_grow_clean 00:10:27.962 ************************************ 00:10:27.962 00:10:27.962 real 0m19.755s 00:10:27.962 user 0m18.802s 00:10:27.962 sys 0m2.606s 00:10:27.962 02:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.962 02:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:27.962 02:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:27.962 02:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:27.962 02:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.962 02:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:27.962 ************************************ 00:10:27.962 START TEST lvs_grow_dirty 00:10:27.962 ************************************ 00:10:27.962 02:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:10:27.962 02:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:27.962 02:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:27.962 02:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:27.962 02:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:27.962 02:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:27.962 02:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:27.963 02:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:27.963 02:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:27.963 02:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:28.222 02:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:28.222 02:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:28.481 02:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ff56ad80-8922-48df-98a8-bdc78d136a83 00:10:28.481 02:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff56ad80-8922-48df-98a8-bdc78d136a83 00:10:28.481 02:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:28.741 02:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:28.741 02:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:28.741 02:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ff56ad80-8922-48df-98a8-bdc78d136a83 lvol 150 00:10:29.000 02:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8fa39283-ccca-4838-83ab-a67da220aaf1 00:10:29.000 02:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:29.000 02:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:29.259 [2024-12-05 02:54:59.912032] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:29.259 [2024-12-05 02:54:59.912160] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:29.259 true 00:10:29.259 02:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:29.259 02:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff56ad80-8922-48df-98a8-bdc78d136a83 00:10:29.517 02:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:29.517 02:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:29.776 02:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8fa39283-ccca-4838-83ab-a67da220aaf1 00:10:30.035 02:55:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:30.293 [2024-12-05 02:55:01.056898] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:30.293 02:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:30.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:30.551 02:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65902 00:10:30.551 02:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:30.551 02:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65902 /var/tmp/bdevperf.sock 00:10:30.551 02:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:30.551 02:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 65902 ']' 00:10:30.551 02:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:30.551 02:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.551 02:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:30.551 02:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.551 02:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:30.809 [2024-12-05 02:55:01.437130] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:10:30.809 [2024-12-05 02:55:01.437270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65902 ] 00:10:30.809 [2024-12-05 02:55:01.646090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.068 [2024-12-05 02:55:01.764658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.326 [2024-12-05 02:55:01.941612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:31.584 02:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.584 02:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:31.584 02:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:32.151 Nvme0n1 00:10:32.151 02:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:32.409 [ 00:10:32.409 { 00:10:32.410 "name": "Nvme0n1", 00:10:32.410 "aliases": [ 00:10:32.410 "8fa39283-ccca-4838-83ab-a67da220aaf1" 00:10:32.410 ], 00:10:32.410 "product_name": "NVMe disk", 00:10:32.410 "block_size": 4096, 00:10:32.410 "num_blocks": 38912, 00:10:32.410 "uuid": "8fa39283-ccca-4838-83ab-a67da220aaf1", 00:10:32.410 "numa_id": -1, 00:10:32.410 "assigned_rate_limits": { 00:10:32.410 "rw_ios_per_sec": 0, 00:10:32.410 "rw_mbytes_per_sec": 0, 00:10:32.410 "r_mbytes_per_sec": 0, 00:10:32.410 "w_mbytes_per_sec": 0 00:10:32.410 }, 00:10:32.410 "claimed": false, 00:10:32.410 "zoned": false, 00:10:32.410 "supported_io_types": { 00:10:32.410 "read": true, 00:10:32.410 "write": true, 00:10:32.410 "unmap": true, 00:10:32.410 "flush": true, 00:10:32.410 "reset": true, 00:10:32.410 "nvme_admin": true, 00:10:32.410 "nvme_io": true, 00:10:32.410 "nvme_io_md": false, 00:10:32.410 "write_zeroes": true, 00:10:32.410 "zcopy": false, 00:10:32.410 "get_zone_info": false, 00:10:32.410 "zone_management": false, 00:10:32.410 "zone_append": false, 00:10:32.410 "compare": true, 00:10:32.410 "compare_and_write": true, 00:10:32.410 "abort": true, 00:10:32.410 "seek_hole": false, 00:10:32.410 "seek_data": false, 00:10:32.410 "copy": true, 00:10:32.410 "nvme_iov_md": false 00:10:32.410 }, 00:10:32.410 "memory_domains": [ 00:10:32.410 { 00:10:32.410 "dma_device_id": "system", 00:10:32.410 "dma_device_type": 1 00:10:32.410 } 00:10:32.410 ], 00:10:32.410 "driver_specific": { 00:10:32.410 "nvme": [ 00:10:32.410 { 00:10:32.410 "trid": { 00:10:32.410 "trtype": "TCP", 00:10:32.410 "adrfam": "IPv4", 00:10:32.410 "traddr": "10.0.0.3", 00:10:32.410 "trsvcid": "4420", 00:10:32.410 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:32.410 }, 00:10:32.410 "ctrlr_data": { 00:10:32.410 "cntlid": 1, 00:10:32.410 "vendor_id": "0x8086", 00:10:32.410 "model_number": "SPDK bdev Controller", 00:10:32.410 "serial_number": "SPDK0", 00:10:32.410 "firmware_revision": "25.01", 00:10:32.410 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:32.410 "oacs": { 00:10:32.410 "security": 0, 00:10:32.410 "format": 0, 00:10:32.410 "firmware": 0, 
00:10:32.410 "ns_manage": 0 00:10:32.410 }, 00:10:32.410 "multi_ctrlr": true, 00:10:32.410 "ana_reporting": false 00:10:32.410 }, 00:10:32.410 "vs": { 00:10:32.410 "nvme_version": "1.3" 00:10:32.410 }, 00:10:32.410 "ns_data": { 00:10:32.410 "id": 1, 00:10:32.410 "can_share": true 00:10:32.410 } 00:10:32.410 } 00:10:32.410 ], 00:10:32.410 "mp_policy": "active_passive" 00:10:32.410 } 00:10:32.410 } 00:10:32.410 ] 00:10:32.410 02:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65930 00:10:32.410 02:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:32.410 02:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:32.410 Running I/O for 10 seconds... 00:10:33.345 Latency(us) 00:10:33.345 [2024-12-05T02:55:04.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:33.345 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:33.345 Nvme0n1 : 1.00 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:10:33.345 [2024-12-05T02:55:04.189Z] =================================================================================================================== 00:10:33.345 [2024-12-05T02:55:04.189Z] Total : 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:10:33.345 00:10:34.282 02:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ff56ad80-8922-48df-98a8-bdc78d136a83 00:10:34.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:34.541 Nvme0n1 : 2.00 5905.50 23.07 0.00 0.00 0.00 0.00 0.00 00:10:34.541 [2024-12-05T02:55:05.385Z] =================================================================================================================== 00:10:34.541 [2024-12-05T02:55:05.385Z] Total : 5905.50 23.07 0.00 0.00 0.00 0.00 0.00 00:10:34.541 00:10:34.541 true 00:10:34.541 02:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff56ad80-8922-48df-98a8-bdc78d136a83 00:10:34.541 02:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:35.107 02:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:35.107 02:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:35.107 02:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 65930 00:10:35.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.365 Nvme0n1 : 3.00 5592.33 21.85 0.00 0.00 0.00 0.00 0.00 00:10:35.365 [2024-12-05T02:55:06.209Z] =================================================================================================================== 00:10:35.365 [2024-12-05T02:55:06.209Z] Total : 5592.33 21.85 0.00 0.00 0.00 0.00 0.00 00:10:35.365 00:10:36.739 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.739 Nvme0n1 : 4.00 5718.25 22.34 0.00 0.00 0.00 0.00 0.00 00:10:36.739 [2024-12-05T02:55:07.583Z] 
=================================================================================================================== 00:10:36.739 [2024-12-05T02:55:07.583Z] Total : 5718.25 22.34 0.00 0.00 0.00 0.00 0.00 00:10:36.739 00:10:37.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.673 Nvme0n1 : 5.00 5768.40 22.53 0.00 0.00 0.00 0.00 0.00 00:10:37.673 [2024-12-05T02:55:08.517Z] =================================================================================================================== 00:10:37.673 [2024-12-05T02:55:08.517Z] Total : 5768.40 22.53 0.00 0.00 0.00 0.00 0.00 00:10:37.673 00:10:38.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.608 Nvme0n1 : 6.00 5780.67 22.58 0.00 0.00 0.00 0.00 0.00 00:10:38.608 [2024-12-05T02:55:09.452Z] =================================================================================================================== 00:10:38.608 [2024-12-05T02:55:09.452Z] Total : 5780.67 22.58 0.00 0.00 0.00 0.00 0.00 00:10:38.608 00:10:39.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.546 Nvme0n1 : 7.00 5771.29 22.54 0.00 0.00 0.00 0.00 0.00 00:10:39.546 [2024-12-05T02:55:10.390Z] =================================================================================================================== 00:10:39.546 [2024-12-05T02:55:10.390Z] Total : 5771.29 22.54 0.00 0.00 0.00 0.00 0.00 00:10:39.546 00:10:40.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.483 Nvme0n1 : 8.00 5748.38 22.45 0.00 0.00 0.00 0.00 0.00 00:10:40.483 [2024-12-05T02:55:11.327Z] =================================================================================================================== 00:10:40.483 [2024-12-05T02:55:11.327Z] Total : 5748.38 22.45 0.00 0.00 0.00 0.00 0.00 00:10:40.483 00:10:41.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:41.421 Nvme0n1 : 9.00 5730.56 22.38 0.00 0.00 0.00 0.00 0.00 00:10:41.421 [2024-12-05T02:55:12.265Z] =================================================================================================================== 00:10:41.421 [2024-12-05T02:55:12.265Z] Total : 5730.56 22.38 0.00 0.00 0.00 0.00 0.00 00:10:41.421 00:10:42.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:42.356 Nvme0n1 : 10.00 5703.60 22.28 0.00 0.00 0.00 0.00 0.00 00:10:42.356 [2024-12-05T02:55:13.200Z] =================================================================================================================== 00:10:42.356 [2024-12-05T02:55:13.200Z] Total : 5703.60 22.28 0.00 0.00 0.00 0.00 0.00 00:10:42.356 00:10:42.356 00:10:42.356 Latency(us) 00:10:42.356 [2024-12-05T02:55:13.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:42.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:42.356 Nvme0n1 : 10.02 5702.26 22.27 0.00 0.00 22439.73 14417.92 196369.69 00:10:42.356 [2024-12-05T02:55:13.200Z] =================================================================================================================== 00:10:42.356 [2024-12-05T02:55:13.200Z] Total : 5702.26 22.27 0.00 0.00 22439.73 14417.92 196369.69 00:10:42.356 { 00:10:42.356 "results": [ 00:10:42.356 { 00:10:42.356 "job": "Nvme0n1", 00:10:42.356 "core_mask": "0x2", 00:10:42.356 "workload": "randwrite", 00:10:42.356 "status": "finished", 00:10:42.356 "queue_depth": 128, 00:10:42.356 "io_size": 4096, 00:10:42.356 "runtime": 
10.024793, 00:10:42.356 "iops": 5702.262380879087, 00:10:42.356 "mibps": 22.274462425308933, 00:10:42.356 "io_failed": 0, 00:10:42.356 "io_timeout": 0, 00:10:42.356 "avg_latency_us": 22439.726029223733, 00:10:42.356 "min_latency_us": 14417.92, 00:10:42.356 "max_latency_us": 196369.6872727273 00:10:42.356 } 00:10:42.356 ], 00:10:42.356 "core_count": 1 00:10:42.356 } 00:10:42.614 02:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65902 00:10:42.614 02:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 65902 ']' 00:10:42.614 02:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 65902 00:10:42.614 02:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:42.614 02:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.614 02:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65902 00:10:42.614 02:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:42.614 02:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:42.614 killing process with pid 65902 00:10:42.614 02:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65902' 00:10:42.614 Received shutdown signal, test time was about 10.000000 seconds 00:10:42.614 00:10:42.614 Latency(us) 00:10:42.614 [2024-12-05T02:55:13.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:42.614 [2024-12-05T02:55:13.458Z] =================================================================================================================== 00:10:42.614 [2024-12-05T02:55:13.458Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:42.614 02:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 65902 00:10:42.614 02:55:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 65902 00:10:43.549 02:55:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:43.808 02:55:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:44.067 02:55:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff56ad80-8922-48df-98a8-bdc78d136a83 00:10:44.067 02:55:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:44.326 02:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:44.326 02:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:44.326 02:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65531 00:10:44.326 
02:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65531 00:10:44.326 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65531 Killed "${NVMF_APP[@]}" "$@" 00:10:44.326 02:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:44.326 02:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:44.326 02:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:44.326 02:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.326 02:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:44.326 02:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=66075 00:10:44.326 02:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:44.326 02:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 66075 00:10:44.326 02:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66075 ']' 00:10:44.326 02:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.326 02:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.326 02:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.326 02:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.326 02:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:44.584 [2024-12-05 02:55:15.201434] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:10:44.584 [2024-12-05 02:55:15.201608] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.584 [2024-12-05 02:55:15.391631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.841 [2024-12-05 02:55:15.486768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.841 [2024-12-05 02:55:15.486854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:44.841 [2024-12-05 02:55:15.486873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.841 [2024-12-05 02:55:15.486895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.841 [2024-12-05 02:55:15.486908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:44.841 [2024-12-05 02:55:15.487920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.841 [2024-12-05 02:55:15.655225] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:45.406 02:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.406 02:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:45.406 02:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:45.406 02:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:45.406 02:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:45.406 02:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.406 02:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:45.664 [2024-12-05 02:55:16.447730] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:45.664 [2024-12-05 02:55:16.448107] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:45.664 [2024-12-05 02:55:16.448345] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:45.664 02:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:45.664 02:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8fa39283-ccca-4838-83ab-a67da220aaf1 00:10:45.664 02:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8fa39283-ccca-4838-83ab-a67da220aaf1 00:10:45.664 02:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.664 02:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:45.664 02:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.664 02:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.664 02:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:46.231 02:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8fa39283-ccca-4838-83ab-a67da220aaf1 -t 2000 00:10:46.231 [ 00:10:46.231 { 00:10:46.231 "name": "8fa39283-ccca-4838-83ab-a67da220aaf1", 00:10:46.231 "aliases": [ 00:10:46.231 "lvs/lvol" 00:10:46.231 ], 00:10:46.231 "product_name": "Logical Volume", 00:10:46.231 "block_size": 4096, 00:10:46.231 "num_blocks": 38912, 00:10:46.231 "uuid": "8fa39283-ccca-4838-83ab-a67da220aaf1", 00:10:46.231 "assigned_rate_limits": { 00:10:46.231 "rw_ios_per_sec": 0, 00:10:46.231 "rw_mbytes_per_sec": 0, 00:10:46.231 "r_mbytes_per_sec": 0, 00:10:46.231 "w_mbytes_per_sec": 0 00:10:46.231 }, 00:10:46.231 
"claimed": false, 00:10:46.231 "zoned": false, 00:10:46.231 "supported_io_types": { 00:10:46.231 "read": true, 00:10:46.231 "write": true, 00:10:46.231 "unmap": true, 00:10:46.231 "flush": false, 00:10:46.231 "reset": true, 00:10:46.231 "nvme_admin": false, 00:10:46.231 "nvme_io": false, 00:10:46.231 "nvme_io_md": false, 00:10:46.231 "write_zeroes": true, 00:10:46.231 "zcopy": false, 00:10:46.231 "get_zone_info": false, 00:10:46.231 "zone_management": false, 00:10:46.231 "zone_append": false, 00:10:46.231 "compare": false, 00:10:46.231 "compare_and_write": false, 00:10:46.231 "abort": false, 00:10:46.231 "seek_hole": true, 00:10:46.231 "seek_data": true, 00:10:46.231 "copy": false, 00:10:46.231 "nvme_iov_md": false 00:10:46.231 }, 00:10:46.231 "driver_specific": { 00:10:46.231 "lvol": { 00:10:46.231 "lvol_store_uuid": "ff56ad80-8922-48df-98a8-bdc78d136a83", 00:10:46.231 "base_bdev": "aio_bdev", 00:10:46.231 "thin_provision": false, 00:10:46.231 "num_allocated_clusters": 38, 00:10:46.231 "snapshot": false, 00:10:46.231 "clone": false, 00:10:46.231 "esnap_clone": false 00:10:46.231 } 00:10:46.231 } 00:10:46.231 } 00:10:46.231 ] 00:10:46.231 02:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:46.490 02:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:46.490 02:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff56ad80-8922-48df-98a8-bdc78d136a83 00:10:46.750 02:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:46.750 02:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:46.750 02:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff56ad80-8922-48df-98a8-bdc78d136a83 00:10:47.009 02:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:47.009 02:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:47.268 [2024-12-05 02:55:17.873390] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:47.268 02:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff56ad80-8922-48df-98a8-bdc78d136a83 00:10:47.268 02:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:47.268 02:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff56ad80-8922-48df-98a8-bdc78d136a83 00:10:47.268 02:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.268 02:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:47.268 02:55:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.268 02:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:47.268 02:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.268 02:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:47.268 02:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.268 02:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:47.268 02:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff56ad80-8922-48df-98a8-bdc78d136a83 00:10:47.527 request: 00:10:47.527 { 00:10:47.527 "uuid": "ff56ad80-8922-48df-98a8-bdc78d136a83", 00:10:47.527 "method": "bdev_lvol_get_lvstores", 00:10:47.527 "req_id": 1 00:10:47.527 } 00:10:47.527 Got JSON-RPC error response 00:10:47.527 response: 00:10:47.527 { 00:10:47.527 "code": -19, 00:10:47.527 "message": "No such device" 00:10:47.527 } 00:10:47.527 02:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:47.527 02:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:47.527 02:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:47.527 02:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:47.527 02:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:47.786 aio_bdev 00:10:47.786 02:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8fa39283-ccca-4838-83ab-a67da220aaf1 00:10:47.786 02:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8fa39283-ccca-4838-83ab-a67da220aaf1 00:10:47.786 02:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.786 02:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:47.786 02:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.786 02:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.786 02:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:48.045 02:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8fa39283-ccca-4838-83ab-a67da220aaf1 -t 2000 00:10:48.305 [ 00:10:48.305 { 
00:10:48.305 "name": "8fa39283-ccca-4838-83ab-a67da220aaf1", 00:10:48.305 "aliases": [ 00:10:48.305 "lvs/lvol" 00:10:48.305 ], 00:10:48.305 "product_name": "Logical Volume", 00:10:48.305 "block_size": 4096, 00:10:48.305 "num_blocks": 38912, 00:10:48.305 "uuid": "8fa39283-ccca-4838-83ab-a67da220aaf1", 00:10:48.305 "assigned_rate_limits": { 00:10:48.305 "rw_ios_per_sec": 0, 00:10:48.305 "rw_mbytes_per_sec": 0, 00:10:48.305 "r_mbytes_per_sec": 0, 00:10:48.305 "w_mbytes_per_sec": 0 00:10:48.305 }, 00:10:48.305 "claimed": false, 00:10:48.305 "zoned": false, 00:10:48.305 "supported_io_types": { 00:10:48.305 "read": true, 00:10:48.305 "write": true, 00:10:48.305 "unmap": true, 00:10:48.305 "flush": false, 00:10:48.305 "reset": true, 00:10:48.305 "nvme_admin": false, 00:10:48.305 "nvme_io": false, 00:10:48.305 "nvme_io_md": false, 00:10:48.305 "write_zeroes": true, 00:10:48.305 "zcopy": false, 00:10:48.305 "get_zone_info": false, 00:10:48.305 "zone_management": false, 00:10:48.305 "zone_append": false, 00:10:48.305 "compare": false, 00:10:48.305 "compare_and_write": false, 00:10:48.305 "abort": false, 00:10:48.305 "seek_hole": true, 00:10:48.305 "seek_data": true, 00:10:48.305 "copy": false, 00:10:48.305 "nvme_iov_md": false 00:10:48.305 }, 00:10:48.305 "driver_specific": { 00:10:48.305 "lvol": { 00:10:48.306 "lvol_store_uuid": "ff56ad80-8922-48df-98a8-bdc78d136a83", 00:10:48.306 "base_bdev": "aio_bdev", 00:10:48.306 "thin_provision": false, 00:10:48.306 "num_allocated_clusters": 38, 00:10:48.306 "snapshot": false, 00:10:48.306 "clone": false, 00:10:48.306 "esnap_clone": false 00:10:48.306 } 00:10:48.306 } 00:10:48.306 } 00:10:48.306 ] 00:10:48.306 02:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:48.306 02:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff56ad80-8922-48df-98a8-bdc78d136a83 00:10:48.306 02:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:48.565 02:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:48.565 02:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff56ad80-8922-48df-98a8-bdc78d136a83 00:10:48.565 02:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:48.825 02:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:48.825 02:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8fa39283-ccca-4838-83ab-a67da220aaf1 00:10:49.083 02:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ff56ad80-8922-48df-98a8-bdc78d136a83 00:10:49.342 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:49.602 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:50.170 00:10:50.170 real 0m22.125s 00:10:50.170 user 0m47.861s 00:10:50.170 sys 0m7.946s 00:10:50.170 ************************************ 00:10:50.170 END TEST lvs_grow_dirty 00:10:50.170 ************************************ 00:10:50.170 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.170 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:50.170 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:50.170 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:50.170 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:50.170 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:50.170 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:50.170 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:50.170 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:50.170 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:50.170 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:50.170 nvmf_trace.0 00:10:50.170 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:50.170 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:50.170 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:50.171 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:50.171 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:50.171 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:50.171 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:50.171 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:50.171 rmmod nvme_tcp 00:10:50.171 rmmod nvme_fabrics 00:10:50.171 rmmod nvme_keyring 00:10:50.171 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:50.171 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:50.171 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:50.171 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 66075 ']' 00:10:50.171 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 66075 00:10:50.171 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 66075 ']' 00:10:50.171 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 66075 00:10:50.171 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:50.171 02:55:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.171 02:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66075 00:10:50.429 killing process with pid 66075 00:10:50.429 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.429 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.429 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66075' 00:10:50.429 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 66075 00:10:50.429 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 66075 00:10:51.361 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:51.361 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:51.361 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:51.361 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:51.361 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:51.361 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:51.361 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:51.361 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:51.361 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:51.361 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:51.361 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:51.361 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:51.361 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:51.361 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:51.361 02:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:51.361 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:51.361 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:51.361 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:51.361 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:51.361 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:51.361 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:51.361 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:51.361 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:10:51.361 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.361 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.361 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.361 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:10:51.361 00:10:51.361 real 0m45.394s 00:10:51.361 user 1m14.175s 00:10:51.361 sys 0m11.448s 00:10:51.361 ************************************ 00:10:51.361 END TEST nvmf_lvs_grow 00:10:51.361 ************************************ 00:10:51.361 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.361 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:51.620 02:55:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:51.620 02:55:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:51.620 02:55:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.620 02:55:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:51.620 ************************************ 00:10:51.620 START TEST nvmf_bdev_io_wait 00:10:51.620 ************************************ 00:10:51.620 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:51.621 * Looking for test storage... 
00:10:51.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:51.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.621 --rc genhtml_branch_coverage=1 00:10:51.621 --rc genhtml_function_coverage=1 00:10:51.621 --rc genhtml_legend=1 00:10:51.621 --rc geninfo_all_blocks=1 00:10:51.621 --rc geninfo_unexecuted_blocks=1 00:10:51.621 00:10:51.621 ' 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:51.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.621 --rc genhtml_branch_coverage=1 00:10:51.621 --rc genhtml_function_coverage=1 00:10:51.621 --rc genhtml_legend=1 00:10:51.621 --rc geninfo_all_blocks=1 00:10:51.621 --rc geninfo_unexecuted_blocks=1 00:10:51.621 00:10:51.621 ' 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:51.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.621 --rc genhtml_branch_coverage=1 00:10:51.621 --rc genhtml_function_coverage=1 00:10:51.621 --rc genhtml_legend=1 00:10:51.621 --rc geninfo_all_blocks=1 00:10:51.621 --rc geninfo_unexecuted_blocks=1 00:10:51.621 00:10:51.621 ' 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:51.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.621 --rc genhtml_branch_coverage=1 00:10:51.621 --rc genhtml_function_coverage=1 00:10:51.621 --rc genhtml_legend=1 00:10:51.621 --rc geninfo_all_blocks=1 00:10:51.621 --rc geninfo_unexecuted_blocks=1 00:10:51.621 00:10:51.621 ' 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:51.621 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:51.621 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:51.622 
02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:51.622 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:51.881 Cannot find device "nvmf_init_br" 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:51.881 Cannot find device "nvmf_init_br2" 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:51.881 Cannot find device "nvmf_tgt_br" 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:51.881 Cannot find device "nvmf_tgt_br2" 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:51.881 Cannot find device "nvmf_init_br" 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:51.881 Cannot find device "nvmf_init_br2" 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:51.881 Cannot find device "nvmf_tgt_br" 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:51.881 Cannot find device "nvmf_tgt_br2" 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:51.881 Cannot find device "nvmf_br" 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:51.881 Cannot find device "nvmf_init_if" 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:51.881 Cannot find device "nvmf_init_if2" 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:51.881 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:10:51.881 
02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:51.881 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:51.881 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:52.140 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:52.140 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:10:52.140 00:10:52.140 --- 10.0.0.3 ping statistics --- 00:10:52.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.140 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:52.140 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:52.140 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:10:52.140 00:10:52.140 --- 10.0.0.4 ping statistics --- 00:10:52.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.140 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:52.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:52.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:52.140 00:10:52.140 --- 10.0.0.1 ping statistics --- 00:10:52.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.140 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:52.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:52.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:10:52.140 00:10:52.140 --- 10.0.0.2 ping statistics --- 00:10:52.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.140 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:52.140 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:52.141 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:52.141 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:52.141 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:52.141 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=66459 00:10:52.141 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:52.141 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 66459 00:10:52.141 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 66459 ']' 00:10:52.141 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.141 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.141 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.141 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.141 02:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:52.141 [2024-12-05 02:55:22.972797] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
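At this point the virtual test network is fully assembled: the nvmf_tgt_ns_spdk namespace holds the target ends of two veth pairs (10.0.0.3 and 10.0.0.4), the host keeps the initiator ends (10.0.0.1 and 10.0.0.2), the four peer interfaces are enslaved to the nvmf_br bridge, iptables admits NVMe/TCP traffic on port 4420, and the four pings confirm reachability in both directions; nvmf_tgt is then launched inside the namespace (the startup banner above). A condensed sketch of the same topology, using the names and addresses from the trace (assumes root, iproute2 and iptables):

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per initiator interface and per target interface
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target ends into the namespace and assign addresses
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # bring everything up, including loopback inside the namespace
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # a host-side bridge ties the four peer interfaces together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # admit NVMe/TCP (port 4420) and bridge-local forwarding, then verify reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2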
00:10:52.141 [2024-12-05 02:55:22.972965] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.400 [2024-12-05 02:55:23.155558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.660 [2024-12-05 02:55:23.249073] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.660 [2024-12-05 02:55:23.249327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.660 [2024-12-05 02:55:23.249425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.660 [2024-12-05 02:55:23.249535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.660 [2024-12-05 02:55:23.249603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.660 [2024-12-05 02:55:23.251443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.660 [2024-12-05 02:55:23.251566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.660 [2024-12-05 02:55:23.251950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.660 [2024-12-05 02:55:23.252204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.231 02:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.231 02:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:53.231 02:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:53.231 02:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:53.231 02:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.231 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.231 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:53.231 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.231 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.231 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.231 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:53.231 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.231 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.527 [2024-12-05 02:55:24.196383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.527 [2024-12-05 02:55:24.217370] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.527 Malloc0 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.527 [2024-12-05 02:55:24.315526] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66500 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66502 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:53.527 02:55:24 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:53.527 { 00:10:53.527 "params": { 00:10:53.527 "name": "Nvme$subsystem", 00:10:53.527 "trtype": "$TEST_TRANSPORT", 00:10:53.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:53.527 "adrfam": "ipv4", 00:10:53.527 "trsvcid": "$NVMF_PORT", 00:10:53.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:53.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:53.527 "hdgst": ${hdgst:-false}, 00:10:53.527 "ddgst": ${ddgst:-false} 00:10:53.527 }, 00:10:53.527 "method": "bdev_nvme_attach_controller" 00:10:53.527 } 00:10:53.527 EOF 00:10:53.527 )") 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66504 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:53.527 { 00:10:53.527 "params": { 00:10:53.527 "name": "Nvme$subsystem", 00:10:53.527 "trtype": "$TEST_TRANSPORT", 00:10:53.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:53.527 "adrfam": "ipv4", 00:10:53.527 "trsvcid": "$NVMF_PORT", 00:10:53.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:53.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:53.527 "hdgst": ${hdgst:-false}, 00:10:53.527 "ddgst": ${ddgst:-false} 00:10:53.527 }, 00:10:53.527 "method": "bdev_nvme_attach_controller" 00:10:53.527 } 00:10:53.527 EOF 00:10:53.527 )") 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66506 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:53.527 { 00:10:53.527 "params": { 00:10:53.527 "name": "Nvme$subsystem", 00:10:53.527 "trtype": "$TEST_TRANSPORT", 00:10:53.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:53.527 "adrfam": "ipv4", 00:10:53.527 "trsvcid": 
"$NVMF_PORT", 00:10:53.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:53.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:53.527 "hdgst": ${hdgst:-false}, 00:10:53.527 "ddgst": ${ddgst:-false} 00:10:53.527 }, 00:10:53.527 "method": "bdev_nvme_attach_controller" 00:10:53.527 } 00:10:53.527 EOF 00:10:53.527 )") 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:53.527 { 00:10:53.527 "params": { 00:10:53.527 "name": "Nvme$subsystem", 00:10:53.527 "trtype": "$TEST_TRANSPORT", 00:10:53.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:53.527 "adrfam": "ipv4", 00:10:53.527 "trsvcid": "$NVMF_PORT", 00:10:53.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:53.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:53.527 "hdgst": ${hdgst:-false}, 00:10:53.527 "ddgst": ${ddgst:-false} 00:10:53.527 }, 00:10:53.527 "method": "bdev_nvme_attach_controller" 00:10:53.527 } 00:10:53.527 EOF 00:10:53.527 )") 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:53.527 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:53.528 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:53.528 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:53.528 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:53.528 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:53.528 "params": { 00:10:53.528 "name": "Nvme1", 00:10:53.528 "trtype": "tcp", 00:10:53.528 "traddr": "10.0.0.3", 00:10:53.528 "adrfam": "ipv4", 00:10:53.528 "trsvcid": "4420", 00:10:53.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:53.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:53.528 "hdgst": false, 00:10:53.528 "ddgst": false 00:10:53.528 }, 00:10:53.528 "method": "bdev_nvme_attach_controller" 00:10:53.528 }' 00:10:53.528 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:53.528 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:53.528 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:53.528 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:53.528 "params": { 00:10:53.528 "name": "Nvme1", 00:10:53.528 "trtype": "tcp", 00:10:53.528 "traddr": "10.0.0.3", 00:10:53.528 "adrfam": "ipv4", 00:10:53.528 "trsvcid": "4420", 00:10:53.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:53.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:53.528 "hdgst": false, 00:10:53.528 "ddgst": false 00:10:53.528 }, 00:10:53.528 "method": "bdev_nvme_attach_controller" 00:10:53.528 }' 00:10:53.839 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:53.839 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:53.839 "params": { 00:10:53.839 "name": "Nvme1", 00:10:53.839 "trtype": "tcp", 00:10:53.839 "traddr": "10.0.0.3", 00:10:53.839 "adrfam": "ipv4", 00:10:53.839 "trsvcid": "4420", 00:10:53.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:53.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:53.839 "hdgst": false, 00:10:53.839 "ddgst": false 00:10:53.839 }, 00:10:53.839 "method": "bdev_nvme_attach_controller" 00:10:53.839 }' 00:10:53.839 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:53.839 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:53.839 "params": { 00:10:53.839 "name": "Nvme1", 00:10:53.839 "trtype": "tcp", 00:10:53.839 "traddr": "10.0.0.3", 00:10:53.839 "adrfam": "ipv4", 00:10:53.839 "trsvcid": "4420", 00:10:53.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:53.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:53.839 "hdgst": false, 00:10:53.839 "ddgst": false 00:10:53.839 }, 00:10:53.839 "method": "bdev_nvme_attach_controller" 00:10:53.839 }' 00:10:53.839 02:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66500 00:10:53.839 [2024-12-05 02:55:24.436700] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:10:53.839 [2024-12-05 02:55:24.436868] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:53.839 [2024-12-05 02:55:24.440010] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:10:53.839 [2024-12-05 02:55:24.440308] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:53.839 [2024-12-05 02:55:24.467875] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:10:53.839 [2024-12-05 02:55:24.468032] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:53.839 [2024-12-05 02:55:24.474124] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
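Before the bdevperf instances start, the rpc_cmd calls traced above provision the target end to end: a very small bdev_io pool (-p 5 -c 1), presumably so the concurrent workloads exhaust it and exercise the bdev_io_wait retry path the test is named after; framework_start_init to finish bringing up the --wait-for-rpc target; a TCP transport; a 64 MiB Malloc0 bdev with 512-byte blocks; and a subsystem exposing it on 10.0.0.3:4420. A sketch of the same sequence issued directly with scripts/rpc.py against the default /var/tmp/spdk.sock socket (the test goes through its rpc_cmd wrapper instead; the arguments are copied from the trace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_set_options -p 5 -c 1            # small bdev_io pool/cache so submissions must wait and retry
    $RPC framework_start_init                  # target was started with --wait-for-rpc
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0  # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420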
00:10:53.839 [2024-12-05 02:55:24.474276] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:54.106 [2024-12-05 02:55:24.665737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.106 [2024-12-05 02:55:24.708365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.106 [2024-12-05 02:55:24.752001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.106 [2024-12-05 02:55:24.783077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:54.106 [2024-12-05 02:55:24.795124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.106 [2024-12-05 02:55:24.832364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:54.106 [2024-12-05 02:55:24.867882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:54.106 [2024-12-05 02:55:24.890011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:54.365 [2024-12-05 02:55:24.954519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:54.365 [2024-12-05 02:55:25.000091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:54.365 [2024-12-05 02:55:25.054155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:54.365 [2024-12-05 02:55:25.054681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:54.365 Running I/O for 1 seconds... 00:10:54.365 Running I/O for 1 seconds... 00:10:54.623 Running I/O for 1 seconds... 00:10:54.623 Running I/O for 1 seconds... 
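Each bdevperf instance receives its controller configuration on file descriptor 63, i.e. --json /dev/fd/63 backed by bash process substitution over the JSON that gen_nvmf_target_json expands from the heredoc template shown earlier; the resolved fragment is the bdev_nvme_attach_controller call printed by the printf/jq pairs above. A standalone equivalent that writes the config to a file instead is sketched below; only the inner fragment appears verbatim in the trace, while the outer "subsystems"/"config" wrapper is the standard SPDK JSON-config layout and is assumed here.

    cat > /tmp/nvmf_bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # the write instance from the trace; the read/flush/unmap runs differ only in -m, -i and -w
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
        --json /tmp/nvmf_bdevperf.json -q 128 -o 4096 -w write -t 1 -s 256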
00:10:55.556 134792.00 IOPS, 526.53 MiB/s 00:10:55.556 Latency(us) 00:10:55.556 [2024-12-05T02:55:26.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.556 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:55.556 Nvme1n1 : 1.00 134456.33 525.22 0.00 0.00 947.02 644.19 2457.60 00:10:55.556 [2024-12-05T02:55:26.401Z] =================================================================================================================== 00:10:55.557 [2024-12-05T02:55:26.401Z] Total : 134456.33 525.22 0.00 0.00 947.02 644.19 2457.60 00:10:55.557 4848.00 IOPS, 18.94 MiB/s 00:10:55.557 Latency(us) 00:10:55.557 [2024-12-05T02:55:26.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.557 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:55.557 Nvme1n1 : 1.03 4819.29 18.83 0.00 0.00 26098.60 4736.47 41704.73 00:10:55.557 [2024-12-05T02:55:26.401Z] =================================================================================================================== 00:10:55.557 [2024-12-05T02:55:26.401Z] Total : 4819.29 18.83 0.00 0.00 26098.60 4736.47 41704.73 00:10:55.557 6498.00 IOPS, 25.38 MiB/s [2024-12-05T02:55:26.401Z] 4724.00 IOPS, 18.45 MiB/s 00:10:55.557 Latency(us) 00:10:55.557 [2024-12-05T02:55:26.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.557 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:55.557 Nvme1n1 : 1.02 6531.36 25.51 0.00 0.00 19448.10 8996.31 28955.00 00:10:55.557 [2024-12-05T02:55:26.401Z] =================================================================================================================== 00:10:55.557 [2024-12-05T02:55:26.401Z] Total : 6531.36 25.51 0.00 0.00 19448.10 8996.31 28955.00 00:10:55.557 00:10:55.557 Latency(us) 00:10:55.557 [2024-12-05T02:55:26.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.557 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:55.557 Nvme1n1 : 1.01 4853.46 18.96 0.00 0.00 26270.62 6881.28 51475.55 00:10:55.557 [2024-12-05T02:55:26.401Z] =================================================================================================================== 00:10:55.557 [2024-12-05T02:55:26.401Z] Total : 4853.46 18.96 0.00 0.00 26270.62 6881.28 51475.55 00:10:56.123 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66502 00:10:56.123 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66504 00:10:56.123 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66506 00:10:56.123 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.123 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.123 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:56.123 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.123 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:56.123 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:56.123 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:10:56.123 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:56.123 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:56.123 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:56.123 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:56.123 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:56.123 rmmod nvme_tcp 00:10:56.123 rmmod nvme_fabrics 00:10:56.381 rmmod nvme_keyring 00:10:56.381 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:56.381 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:56.381 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:56.381 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 66459 ']' 00:10:56.381 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 66459 00:10:56.381 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 66459 ']' 00:10:56.381 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 66459 00:10:56.381 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:56.381 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.381 02:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66459 00:10:56.381 killing process with pid 66459 00:10:56.381 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:56.381 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.381 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66459' 00:10:56.381 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 66459 00:10:56.381 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 66459 00:10:57.317 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:57.317 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:57.317 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:57.317 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:57.317 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:57.317 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:57.317 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:57.317 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:57.317 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:57.317 02:55:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:57.317 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:57.317 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:57.317 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:57.317 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:57.317 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:57.317 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:57.317 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:57.317 02:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:57.317 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:57.317 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:57.317 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:57.318 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:57.318 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:57.318 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.318 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.318 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.318 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:10:57.318 00:10:57.318 real 0m5.898s 00:10:57.318 user 0m25.315s 00:10:57.318 sys 0m2.584s 00:10:57.318 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.318 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:57.318 ************************************ 00:10:57.318 END TEST nvmf_bdev_io_wait 00:10:57.318 ************************************ 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:57.578 ************************************ 00:10:57.578 START TEST nvmf_queue_depth 00:10:57.578 ************************************ 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:57.578 * Looking for test 
storage... 00:10:57.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:57.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.578 --rc genhtml_branch_coverage=1 00:10:57.578 --rc genhtml_function_coverage=1 00:10:57.578 --rc genhtml_legend=1 00:10:57.578 --rc geninfo_all_blocks=1 00:10:57.578 --rc geninfo_unexecuted_blocks=1 00:10:57.578 00:10:57.578 ' 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:57.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.578 --rc genhtml_branch_coverage=1 00:10:57.578 --rc genhtml_function_coverage=1 00:10:57.578 --rc genhtml_legend=1 00:10:57.578 --rc geninfo_all_blocks=1 00:10:57.578 --rc geninfo_unexecuted_blocks=1 00:10:57.578 00:10:57.578 ' 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:57.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.578 --rc genhtml_branch_coverage=1 00:10:57.578 --rc genhtml_function_coverage=1 00:10:57.578 --rc genhtml_legend=1 00:10:57.578 --rc geninfo_all_blocks=1 00:10:57.578 --rc geninfo_unexecuted_blocks=1 00:10:57.578 00:10:57.578 ' 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:57.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.578 --rc genhtml_branch_coverage=1 00:10:57.578 --rc genhtml_function_coverage=1 00:10:57.578 --rc genhtml_legend=1 00:10:57.578 --rc geninfo_all_blocks=1 00:10:57.578 --rc geninfo_unexecuted_blocks=1 00:10:57.578 00:10:57.578 ' 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:57.578 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.579 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:57.579 
02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:57.579 02:55:28 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:57.579 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:57.838 Cannot find device "nvmf_init_br" 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:57.838 Cannot find device "nvmf_init_br2" 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:57.838 Cannot find device "nvmf_tgt_br" 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:57.838 Cannot find device "nvmf_tgt_br2" 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:57.838 Cannot find device "nvmf_init_br" 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:57.838 Cannot find device "nvmf_init_br2" 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:57.838 Cannot find device "nvmf_tgt_br" 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:57.838 Cannot find device "nvmf_tgt_br2" 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:57.838 Cannot find device "nvmf_br" 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:57.838 Cannot find device "nvmf_init_if" 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:57.838 Cannot find device "nvmf_init_if2" 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:57.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:57.838 02:55:28 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:57.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:57.838 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:57.839 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:57.839 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:57.839 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:57.839 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:57.839 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:57.839 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:57.839 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:58.099 
02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:58.099 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:58.099 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:10:58.099 00:10:58.099 --- 10.0.0.3 ping statistics --- 00:10:58.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.099 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:58.099 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:58.099 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:10:58.099 00:10:58.099 --- 10.0.0.4 ping statistics --- 00:10:58.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.099 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:58.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:58.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:10:58.099 00:10:58.099 --- 10.0.0.1 ping statistics --- 00:10:58.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.099 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:58.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:58.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:10:58.099 00:10:58.099 --- 10.0.0.2 ping statistics --- 00:10:58.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.099 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=66814 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 66814 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 66814 ']' 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.099 02:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:58.358 [2024-12-05 02:55:28.965825] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:10:58.358 [2024-12-05 02:55:28.965990] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.358 [2024-12-05 02:55:29.161943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.618 [2024-12-05 02:55:29.289361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.618 [2024-12-05 02:55:29.289432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:58.618 [2024-12-05 02:55:29.289455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.618 [2024-12-05 02:55:29.289482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.618 [2024-12-05 02:55:29.289498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:58.618 [2024-12-05 02:55:29.290938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.877 [2024-12-05 02:55:29.508534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:59.136 02:55:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.136 02:55:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:59.136 02:55:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:59.136 02:55:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:59.136 02:55:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:59.395 02:55:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.395 02:55:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:59.395 02:55:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.395 02:55:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:59.395 [2024-12-05 02:55:30.003404] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:59.395 Malloc0 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:59.395 [2024-12-05 02:55:30.102703] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66847 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66847 /var/tmp/bdevperf.sock 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 66847 ']' 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:59.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:59.395 02:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:59.395 [2024-12-05 02:55:30.222941] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
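With the target up, the test builds its storage stack entirely over RPC and then points bdevperf at it with a deep queue. The rpc_cmd calls traced above amount to the following scripts/rpc.py invocations, followed by the bdevperf launch and the attach/perform_tests step traced just below (rpc_cmd is effectively a wrapper around rpc.py; commands and arguments are copied from the trace):

    # Transport, backing bdev, subsystem, namespace and TCP listener on the target.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # bdevperf starts paused (-z) on its own RPC socket, with queue depth 1024.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

    # Attach the exported namespace as an NVMe-oF bdev and kick off the run.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests

The 10-second verify run settles around 6.6k IOPS at 4 KiB with the 1024-deep queue, which is what the Latency table that follows reports.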
00:10:59.395 [2024-12-05 02:55:30.223112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66847 ] 00:10:59.655 [2024-12-05 02:55:30.413039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.914 [2024-12-05 02:55:30.537992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.914 [2024-12-05 02:55:30.739605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:00.482 02:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.482 02:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:11:00.482 02:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:00.482 02:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.482 02:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:00.482 NVMe0n1 00:11:00.482 02:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.482 02:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:00.741 Running I/O for 10 seconds... 00:11:02.613 6036.00 IOPS, 23.58 MiB/s [2024-12-05T02:55:34.836Z] 6147.50 IOPS, 24.01 MiB/s [2024-12-05T02:55:35.771Z] 6403.00 IOPS, 25.01 MiB/s [2024-12-05T02:55:36.707Z] 6405.50 IOPS, 25.02 MiB/s [2024-12-05T02:55:37.642Z] 6478.80 IOPS, 25.31 MiB/s [2024-12-05T02:55:38.579Z] 6485.33 IOPS, 25.33 MiB/s [2024-12-05T02:55:39.514Z] 6509.00 IOPS, 25.43 MiB/s [2024-12-05T02:55:40.487Z] 6528.88 IOPS, 25.50 MiB/s [2024-12-05T02:55:41.422Z] 6569.89 IOPS, 25.66 MiB/s [2024-12-05T02:55:41.681Z] 6564.60 IOPS, 25.64 MiB/s 00:11:10.837 Latency(us) 00:11:10.837 [2024-12-05T02:55:41.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.837 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:10.837 Verification LBA range: start 0x0 length 0x4000 00:11:10.837 NVMe0n1 : 10.08 6612.17 25.83 0.00 0.00 154113.67 5510.98 103904.35 00:11:10.837 [2024-12-05T02:55:41.681Z] =================================================================================================================== 00:11:10.837 [2024-12-05T02:55:41.681Z] Total : 6612.17 25.83 0.00 0.00 154113.67 5510.98 103904.35 00:11:10.837 { 00:11:10.837 "results": [ 00:11:10.837 { 00:11:10.837 "job": "NVMe0n1", 00:11:10.837 "core_mask": "0x1", 00:11:10.837 "workload": "verify", 00:11:10.837 "status": "finished", 00:11:10.837 "verify_range": { 00:11:10.837 "start": 0, 00:11:10.837 "length": 16384 00:11:10.837 }, 00:11:10.837 "queue_depth": 1024, 00:11:10.837 "io_size": 4096, 00:11:10.837 "runtime": 10.079287, 00:11:10.837 "iops": 6612.174055565637, 00:11:10.837 "mibps": 25.82880490455327, 00:11:10.837 "io_failed": 0, 00:11:10.837 "io_timeout": 0, 00:11:10.837 "avg_latency_us": 154113.66984245117, 00:11:10.837 "min_latency_us": 5510.981818181818, 00:11:10.837 "max_latency_us": 103904.34909090909 00:11:10.837 
} 00:11:10.837 ], 00:11:10.837 "core_count": 1 00:11:10.837 } 00:11:10.837 02:55:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66847 00:11:10.837 02:55:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 66847 ']' 00:11:10.837 02:55:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 66847 00:11:10.837 02:55:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:10.837 02:55:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.837 02:55:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66847 00:11:10.837 02:55:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.837 02:55:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.837 killing process with pid 66847 00:11:10.837 02:55:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66847' 00:11:10.837 02:55:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 66847 00:11:10.837 Received shutdown signal, test time was about 10.000000 seconds 00:11:10.837 00:11:10.837 Latency(us) 00:11:10.837 [2024-12-05T02:55:41.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.837 [2024-12-05T02:55:41.681Z] =================================================================================================================== 00:11:10.837 [2024-12-05T02:55:41.681Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:10.837 02:55:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 66847 00:11:11.404 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:11.404 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:11.404 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:11.404 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:11.662 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:11.662 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:11.662 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:11.662 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:11.662 rmmod nvme_tcp 00:11:11.662 rmmod nvme_fabrics 00:11:11.662 rmmod nvme_keyring 00:11:11.662 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:11.662 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:11.662 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:11.662 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 66814 ']' 00:11:11.662 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 66814 00:11:11.662 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 66814 ']' 00:11:11.662 
02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 66814 00:11:11.662 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:11.662 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.662 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66814 00:11:11.662 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:11.662 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:11.662 killing process with pid 66814 00:11:11.662 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66814' 00:11:11.662 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 66814 00:11:11.662 02:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 66814 00:11:12.598 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:12.598 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:12.598 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:12.598 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:12.598 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:12.598 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:12.598 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:12.598 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:12.598 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:12.598 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:12.598 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:12.598 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:12.598 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:12.857 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:12.857 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:12.857 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:12.857 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:12.857 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:12.857 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:12.857 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:12.857 02:55:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:12.857 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:12.857 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:12.857 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.857 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.857 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.857 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:11:12.857 00:11:12.857 real 0m15.432s 00:11:12.857 user 0m25.828s 00:11:12.857 sys 0m2.395s 00:11:12.857 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.857 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:12.857 ************************************ 00:11:12.857 END TEST nvmf_queue_depth 00:11:12.857 ************************************ 00:11:12.857 02:55:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:12.857 02:55:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:12.857 02:55:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.857 02:55:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:12.858 ************************************ 00:11:12.858 START TEST nvmf_target_multipath 00:11:12.858 ************************************ 00:11:12.858 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:13.117 * Looking for test storage... 
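The nvmftestfini path traced just above unwinds everything the setup created: the SPDK-tagged firewall rules are filtered back out, the bridge and host-side veths are deleted, and the target namespace goes away. Roughly, with the final namespace removal being an assumption about what the remove_spdk_ns helper does:

    # Drop only the rules tagged with the SPDK_NVMF comment, leaving other rules intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Detach the ports from the bridge, then remove the bridge and the veth pairs.
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" nomaster
        ip link set "$l" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    # remove_spdk_ns: presumably equivalent to deleting the test namespace.
    ip netns delete nvmf_tgt_ns_spdk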
00:11:13.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:13.117 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:13.117 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:13.117 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:11:13.117 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:13.117 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.117 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.117 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.117 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.117 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.117 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:13.117 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.117 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.117 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.117 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.117 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.117 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:13.117 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:13.117 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.117 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:13.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.118 --rc genhtml_branch_coverage=1 00:11:13.118 --rc genhtml_function_coverage=1 00:11:13.118 --rc genhtml_legend=1 00:11:13.118 --rc geninfo_all_blocks=1 00:11:13.118 --rc geninfo_unexecuted_blocks=1 00:11:13.118 00:11:13.118 ' 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:13.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.118 --rc genhtml_branch_coverage=1 00:11:13.118 --rc genhtml_function_coverage=1 00:11:13.118 --rc genhtml_legend=1 00:11:13.118 --rc geninfo_all_blocks=1 00:11:13.118 --rc geninfo_unexecuted_blocks=1 00:11:13.118 00:11:13.118 ' 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:13.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.118 --rc genhtml_branch_coverage=1 00:11:13.118 --rc genhtml_function_coverage=1 00:11:13.118 --rc genhtml_legend=1 00:11:13.118 --rc geninfo_all_blocks=1 00:11:13.118 --rc geninfo_unexecuted_blocks=1 00:11:13.118 00:11:13.118 ' 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:13.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.118 --rc genhtml_branch_coverage=1 00:11:13.118 --rc genhtml_function_coverage=1 00:11:13.118 --rc genhtml_legend=1 00:11:13.118 --rc geninfo_all_blocks=1 00:11:13.118 --rc geninfo_unexecuted_blocks=1 00:11:13.118 00:11:13.118 ' 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.118 
02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:13.118 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:13.118 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:13.119 02:55:43 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:13.119 Cannot find device "nvmf_init_br" 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:13.119 Cannot find device "nvmf_init_br2" 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:13.119 Cannot find device "nvmf_tgt_br" 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:13.119 Cannot find device "nvmf_tgt_br2" 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:13.119 Cannot find device "nvmf_init_br" 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:11:13.119 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:13.394 Cannot find device "nvmf_init_br2" 00:11:13.394 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:11:13.394 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:13.394 Cannot find device "nvmf_tgt_br" 00:11:13.394 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:11:13.394 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:13.394 Cannot find device "nvmf_tgt_br2" 00:11:13.394 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:11:13.394 02:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:13.394 Cannot find device "nvmf_br" 00:11:13.394 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:11:13.394 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:13.394 Cannot find device "nvmf_init_if" 00:11:13.394 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:11:13.394 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:13.394 Cannot find device "nvmf_init_if2" 00:11:13.394 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:11:13.394 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:13.394 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:13.394 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:11:13.394 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:13.394 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:13.394 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:11:13.394 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:13.394 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:13.394 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:13.394 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:13.394 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:13.394 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:13.395 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:13.395 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:13.395 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:13.395 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:13.395 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:13.395 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:13.395 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:13.395 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:13.395 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:13.395 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:13.395 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:13.395 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
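The multipath test rebuilds the same dual-path veth topology the queue_depth test used: two initiator-facing interfaces on the host (10.0.0.1 and 10.0.0.2) and two target-facing interfaces inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), with the peer ends enslaved to an nvmf_br bridge and per-interface ACCEPT rules for port 4420 added right after (the bridge and iptables steps are traced just below). Condensed from the commands traced above:

    # Namespace plus four veth pairs; the *_if ends carry addresses, the *_br ends join the bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # (the trace also brings the *_if ends and the namespace loopback up)
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" up
        ip link set "$l" master nvmf_br
    done

Two independent host-side interfaces bridged to two namespaced target interfaces is what later lets the test register listeners on both 10.0.0.3 and 10.0.0.4 and exercise two distinct paths to the same subsystem.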
00:11:13.395 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:13.395 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:13.395 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:13.395 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:13.395 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:13.395 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:13.654 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:13.654 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:13.654 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:13.654 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:13.654 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:13.654 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:13.654 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:13.654 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:13.654 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:13.654 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:13.654 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:11:13.654 00:11:13.654 --- 10.0.0.3 ping statistics --- 00:11:13.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.654 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:11:13.654 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:13.654 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:13.654 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:11:13.654 00:11:13.654 --- 10.0.0.4 ping statistics --- 00:11:13.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.654 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:13.654 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:13.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:13.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:13.654 00:11:13.654 --- 10.0.0.1 ping statistics --- 00:11:13.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.654 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:13.654 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:13.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:13.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:11:13.654 00:11:13.655 --- 10.0.0.2 ping statistics --- 00:11:13.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.655 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=67237 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 67237 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 67237 ']' 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:11:13.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.655 02:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:13.655 [2024-12-05 02:55:44.450208] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:11:13.655 [2024-12-05 02:55:44.450389] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.914 [2024-12-05 02:55:44.638059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.173 [2024-12-05 02:55:44.768670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.173 [2024-12-05 02:55:44.768781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.173 [2024-12-05 02:55:44.768808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.173 [2024-12-05 02:55:44.768823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.173 [2024-12-05 02:55:44.768839] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.173 [2024-12-05 02:55:44.771067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.173 [2024-12-05 02:55:44.771239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.173 [2024-12-05 02:55:44.771361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.173 [2024-12-05 02:55:44.771453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.173 [2024-12-05 02:55:44.996025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:14.738 02:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.738 02:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:11:14.738 02:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:14.738 02:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:14.738 02:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:14.738 02:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.738 02:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:14.996 [2024-12-05 02:55:45.688518] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.996 02:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:15.255 Malloc0 00:11:15.255 02:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:15.514 02:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:15.772 02:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:16.032 [2024-12-05 02:55:46.762082] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:16.032 02:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:11:16.291 [2024-12-05 02:55:46.998356] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:11:16.291 02:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:16.550 02:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:11:16.550 02:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:16.550 02:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:11:16.550 02:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.550 02:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:16.550 02:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:11:19.084 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:19.084 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:19.084 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:19.084 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:19.084 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:19.084 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:11:19.084 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:19.084 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:19.084 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:19.084 02:55:49 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:19.084 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67328 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:19.085 02:55:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:19.085 [global] 00:11:19.085 thread=1 00:11:19.085 invalidate=1 00:11:19.085 rw=randrw 00:11:19.085 time_based=1 00:11:19.085 runtime=6 00:11:19.085 ioengine=libaio 00:11:19.085 direct=1 00:11:19.085 bs=4096 00:11:19.085 iodepth=128 00:11:19.085 norandommap=0 00:11:19.085 numjobs=1 00:11:19.085 00:11:19.085 verify_dump=1 00:11:19.085 verify_backlog=512 00:11:19.085 verify_state_save=0 00:11:19.085 do_verify=1 00:11:19.085 verify=crc32c-intel 00:11:19.085 [job0] 00:11:19.085 filename=/dev/nvme0n1 00:11:19.085 Could not set queue depth (nvme0n1) 00:11:19.085 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.085 fio-3.35 00:11:19.085 Starting 1 thread 00:11:19.652 02:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:19.911 02:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:20.169 02:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:20.169 02:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:20.169 02:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:20.169 02:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:20.169 02:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:20.169 02:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:20.169 02:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:20.169 02:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:20.169 02:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:20.169 02:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:20.169 02:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:20.169 02:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:20.169 02:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:20.427 02:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:20.686 02:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:20.686 02:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:20.686 02:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:20.686 02:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:20.686 02:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:20.686 02:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:20.686 02:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:20.686 02:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:20.686 02:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:20.686 02:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:20.686 02:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:20.686 02:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:20.686 02:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67328 00:11:24.872 00:11:24.872 job0: (groupid=0, jobs=1): err= 0: pid=67349: Thu Dec 5 02:55:55 2024 00:11:24.872 read: IOPS=8815, BW=34.4MiB/s (36.1MB/s)(207MiB/6007msec) 00:11:24.872 slat (usec): min=7, max=9841, avg=69.15, stdev=270.95 00:11:24.872 clat (usec): min=1855, max=22967, avg=9967.68, stdev=1728.77 00:11:24.872 lat (usec): min=1884, max=22991, avg=10036.83, stdev=1732.59 00:11:24.872 clat percentiles (usec): 00:11:24.872 | 1.00th=[ 5211], 5.00th=[ 7898], 10.00th=[ 8586], 20.00th=[ 9110], 00:11:24.872 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[10028], 00:11:24.872 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11469], 95.00th=[13960], 00:11:24.872 | 99.00th=[15926], 99.50th=[16319], 99.90th=[17171], 99.95th=[17433], 00:11:24.872 | 99.99th=[17695] 00:11:24.872 bw ( KiB/s): min= 4016, max=23680, per=50.12%, avg=17672.00, stdev=6510.01, samples=12 00:11:24.872 iops : min= 1004, max= 5920, avg=4418.00, stdev=1627.50, samples=12 00:11:24.872 write: IOPS=5235, BW=20.5MiB/s (21.4MB/s)(104MiB/5085msec); 0 zone resets 00:11:24.872 slat (usec): min=16, max=3144, avg=76.51, stdev=200.45 00:11:24.872 clat (usec): min=1714, max=17155, avg=8796.61, stdev=1586.59 00:11:24.872 lat (usec): min=1741, max=17179, avg=8873.12, stdev=1592.67 00:11:24.872 clat percentiles (usec): 00:11:24.872 | 1.00th=[ 3884], 5.00th=[ 5145], 10.00th=[ 7242], 20.00th=[ 8094], 00:11:24.872 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:11:24.872 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10683], 00:11:24.872 | 99.00th=[13566], 99.50th=[14353], 99.90th=[15795], 99.95th=[16057], 00:11:24.872 | 99.99th=[16909] 00:11:24.872 bw ( KiB/s): min= 4320, max=23136, per=84.57%, avg=17711.33, stdev=6310.01, samples=12 00:11:24.872 iops : min= 1080, max= 5784, avg=4427.83, stdev=1577.50, samples=12 00:11:24.872 lat (msec) : 2=0.02%, 4=0.50%, 10=68.18%, 20=31.31%, 50=0.01% 00:11:24.872 cpu : usr=4.96%, sys=18.96%, ctx=4672, majf=0, minf=108 00:11:24.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:24.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.872 issued rwts: total=52952,26624,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.872 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.872 00:11:24.872 Run status group 0 (all jobs): 00:11:24.872 READ: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=207MiB (217MB), run=6007-6007msec 00:11:24.872 WRITE: bw=20.5MiB/s (21.4MB/s), 20.5MiB/s-20.5MiB/s (21.4MB/s-21.4MB/s), io=104MiB (109MB), run=5085-5085msec 00:11:24.872 00:11:24.872 Disk stats (read/write): 00:11:24.872 nvme0n1: ios=52174/26104, merge=0/0, ticks=501089/216880, in_queue=717969, util=98.60% 00:11:24.872 02:55:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:25.448 02:55:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:11:25.448 02:55:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:25.448 02:55:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:25.448 02:55:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:25.448 02:55:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:25.448 02:55:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:25.448 02:55:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:25.448 02:55:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:25.448 02:55:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:25.448 02:55:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:25.448 02:55:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:25.448 02:55:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:25.448 02:55:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:25.448 02:55:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:25.448 02:55:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:25.448 02:55:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67434 00:11:25.448 02:55:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:25.728 [global] 00:11:25.728 thread=1 00:11:25.728 invalidate=1 00:11:25.728 rw=randrw 00:11:25.728 time_based=1 00:11:25.728 runtime=6 00:11:25.728 ioengine=libaio 00:11:25.728 direct=1 00:11:25.728 bs=4096 00:11:25.728 iodepth=128 00:11:25.728 norandommap=0 00:11:25.728 numjobs=1 00:11:25.728 00:11:25.728 verify_dump=1 00:11:25.728 verify_backlog=512 00:11:25.728 verify_state_save=0 00:11:25.728 do_verify=1 00:11:25.728 verify=crc32c-intel 00:11:25.728 [job0] 00:11:25.728 filename=/dev/nvme0n1 00:11:25.728 Could not set queue depth (nvme0n1) 00:11:25.728 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:25.728 fio-3.35 00:11:25.728 Starting 1 thread 00:11:26.666 02:55:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:26.923 02:55:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:27.181 
02:55:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:27.181 02:55:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:27.181 02:55:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:27.181 02:55:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:27.181 02:55:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:27.181 02:55:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:27.181 02:55:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:27.181 02:55:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:27.181 02:55:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:27.181 02:55:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:27.181 02:55:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:27.181 02:55:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:27.181 02:55:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:27.439 02:55:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:27.698 02:55:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:27.698 02:55:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:27.698 02:55:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:27.698 02:55:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:27.698 02:55:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:27.698 02:55:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:27.698 02:55:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:27.698 02:55:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:27.698 02:55:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:27.698 02:55:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:27.698 02:55:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:27.698 02:55:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:27.698 02:55:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67434 00:11:31.879 00:11:31.879 job0: (groupid=0, jobs=1): err= 0: pid=67459: Thu Dec 5 02:56:02 2024 00:11:31.879 read: IOPS=9807, BW=38.3MiB/s (40.2MB/s)(230MiB/6008msec) 00:11:31.879 slat (usec): min=6, max=9105, avg=50.95, stdev=229.17 00:11:31.879 clat (usec): min=1028, max=20881, avg=8974.81, stdev=2489.02 00:11:31.879 lat (usec): min=1037, max=20917, avg=9025.76, stdev=2508.15 00:11:31.879 clat percentiles (usec): 00:11:31.879 | 1.00th=[ 3163], 5.00th=[ 4228], 10.00th=[ 5342], 20.00th=[ 6849], 00:11:31.879 | 30.00th=[ 8291], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:11:31.879 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11207], 95.00th=[13042], 00:11:31.879 | 99.00th=[15270], 99.50th=[15664], 99.90th=[16319], 99.95th=[16712], 00:11:31.879 | 99.99th=[16909] 00:11:31.879 bw ( KiB/s): min= 7880, max=34160, per=52.18%, avg=20472.67, stdev=7632.92, samples=12 00:11:31.879 iops : min= 1970, max= 8540, avg=5118.17, stdev=1908.23, samples=12 00:11:31.879 write: IOPS=5777, BW=22.6MiB/s (23.7MB/s)(120MiB/5327msec); 0 zone resets 00:11:31.879 slat (usec): min=15, max=4140, avg=62.26, stdev=170.72 00:11:31.879 clat (usec): min=1166, max=16260, avg=7605.96, stdev=2348.85 00:11:31.879 lat (usec): min=1193, max=17013, avg=7668.21, stdev=2369.61 00:11:31.879 clat percentiles (usec): 00:11:31.879 | 1.00th=[ 2769], 5.00th=[ 3720], 10.00th=[ 4228], 20.00th=[ 5014], 00:11:31.879 | 30.00th=[ 5866], 40.00th=[ 7635], 50.00th=[ 8455], 60.00th=[ 8848], 00:11:31.879 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10028], 95.00th=[10552], 00:11:31.879 | 99.00th=[12911], 99.50th=[13698], 99.90th=[15008], 99.95th=[15533], 00:11:31.879 | 99.99th=[16188] 00:11:31.879 bw ( KiB/s): min= 8064, max=35064, per=88.62%, avg=20478.67, stdev=7520.20, samples=12 00:11:31.879 iops : min= 2016, max= 8766, avg=5119.67, stdev=1880.05, samples=12 00:11:31.879 lat (msec) : 2=0.13%, 4=5.19%, 10=69.01%, 20=25.67%, 50=0.01% 00:11:31.879 cpu : usr=5.71%, sys=20.59%, ctx=5071, majf=0, minf=90 00:11:31.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:31.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.879 issued rwts: total=58923,30775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.879 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.879 
00:11:31.879 Run status group 0 (all jobs): 00:11:31.879 READ: bw=38.3MiB/s (40.2MB/s), 38.3MiB/s-38.3MiB/s (40.2MB/s-40.2MB/s), io=230MiB (241MB), run=6008-6008msec 00:11:31.879 WRITE: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=120MiB (126MB), run=5327-5327msec 00:11:31.879 00:11:31.879 Disk stats (read/write): 00:11:31.879 nvme0n1: ios=58425/30062, merge=0/0, ticks=503506/214119, in_queue=717625, util=98.63% 00:11:31.879 02:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:31.879 02:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:31.879 02:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:11:31.879 02:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:31.879 02:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.879 02:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:31.879 02:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.879 02:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:11:31.879 02:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:32.136 02:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:32.395 02:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:32.395 02:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:32.395 02:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:32.395 02:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:32.395 02:56:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:32.395 02:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:32.395 02:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:32.395 02:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:32.395 02:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:32.395 rmmod nvme_tcp 00:11:32.395 rmmod nvme_fabrics 00:11:32.395 rmmod nvme_keyring 00:11:32.395 02:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.395 02:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:32.395 02:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:32.395 02:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 67237 ']' 00:11:32.395 02:56:03 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 67237 00:11:32.395 02:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 67237 ']' 00:11:32.395 02:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 67237 00:11:32.395 02:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:11:32.395 02:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.395 02:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67237 00:11:32.395 killing process with pid 67237 00:11:32.395 02:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.395 02:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.395 02:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67237' 00:11:32.395 02:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 67237 00:11:32.395 02:56:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 67237 00:11:33.327 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:33.327 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:33.327 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:33.327 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:33.327 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:33.327 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:33.327 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:33.327 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:33.327 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:33.327 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:33.585 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:33.585 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:33.585 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:33.585 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:33.585 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:33.585 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:33.585 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:33.585 02:56:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:33.585 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:33.585 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:33.585 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:33.585 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:33.585 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:33.585 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.585 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.585 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.585 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:11:33.585 00:11:33.585 real 0m20.754s 00:11:33.585 user 1m15.867s 00:11:33.585 sys 0m9.399s 00:11:33.585 ************************************ 00:11:33.585 END TEST nvmf_target_multipath 00:11:33.585 ************************************ 00:11:33.585 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.585 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:33.844 02:56:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:33.844 02:56:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:33.844 02:56:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.844 02:56:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:33.844 ************************************ 00:11:33.844 START TEST nvmf_zcopy 00:11:33.844 ************************************ 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:33.845 * Looking for test storage... 
00:11:33.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:33.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.845 --rc genhtml_branch_coverage=1 00:11:33.845 --rc genhtml_function_coverage=1 00:11:33.845 --rc genhtml_legend=1 00:11:33.845 --rc geninfo_all_blocks=1 00:11:33.845 --rc geninfo_unexecuted_blocks=1 00:11:33.845 00:11:33.845 ' 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:33.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.845 --rc genhtml_branch_coverage=1 00:11:33.845 --rc genhtml_function_coverage=1 00:11:33.845 --rc genhtml_legend=1 00:11:33.845 --rc geninfo_all_blocks=1 00:11:33.845 --rc geninfo_unexecuted_blocks=1 00:11:33.845 00:11:33.845 ' 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:33.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.845 --rc genhtml_branch_coverage=1 00:11:33.845 --rc genhtml_function_coverage=1 00:11:33.845 --rc genhtml_legend=1 00:11:33.845 --rc geninfo_all_blocks=1 00:11:33.845 --rc geninfo_unexecuted_blocks=1 00:11:33.845 00:11:33.845 ' 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:33.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.845 --rc genhtml_branch_coverage=1 00:11:33.845 --rc genhtml_function_coverage=1 00:11:33.845 --rc genhtml_legend=1 00:11:33.845 --rc geninfo_all_blocks=1 00:11:33.845 --rc geninfo_unexecuted_blocks=1 00:11:33.845 00:11:33.845 ' 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:33.845 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.104 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:34.104 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.104 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.104 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.104 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.104 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.104 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.104 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.104 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.104 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.104 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.104 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:34.104 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:34.104 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:11:34.104 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:34.104 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:34.105 Cannot find device "nvmf_init_br" 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:34.105 02:56:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:34.105 Cannot find device "nvmf_init_br2" 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:34.105 Cannot find device "nvmf_tgt_br" 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:34.105 Cannot find device "nvmf_tgt_br2" 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:34.105 Cannot find device "nvmf_init_br" 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:34.105 Cannot find device "nvmf_init_br2" 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:34.105 Cannot find device "nvmf_tgt_br" 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:34.105 Cannot find device "nvmf_tgt_br2" 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:34.105 Cannot find device "nvmf_br" 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:34.105 Cannot find device "nvmf_init_if" 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:34.105 Cannot find device "nvmf_init_if2" 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:34.105 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:34.105 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:34.105 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:34.365 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:34.365 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:34.365 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:34.365 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:34.365 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:34.365 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:34.365 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:34.365 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:34.365 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:34.365 02:56:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:34.365 02:56:05 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:34.365 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:34.365 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:11:34.365 00:11:34.365 --- 10.0.0.3 ping statistics --- 00:11:34.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.365 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:34.365 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:34.365 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:11:34.365 00:11:34.365 --- 10.0.0.4 ping statistics --- 00:11:34.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.365 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:34.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:34.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:34.365 00:11:34.365 --- 10.0.0.1 ping statistics --- 00:11:34.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.365 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:34.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:34.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:11:34.365 00:11:34.365 --- 10.0.0.2 ping statistics --- 00:11:34.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.365 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:34.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=67770 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 67770 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 67770 ']' 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.365 02:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:34.623 [2024-12-05 02:56:05.257063] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
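What the preceding nvmf/common.sh trace is doing, condensed: it tears down any leftover interfaces, then builds the private test network for the zcopy run — veth pairs for the initiator side (nvmf_init_if/nvmf_init_if2 at 10.0.0.1/10.0.0.2), veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace for the target side (nvmf_tgt_if/nvmf_tgt_if2 at 10.0.0.3/10.0.0.4), everything tied together through the nvmf_br bridge, iptables ACCEPT rules for TCP port 4420, and four pings to prove both directions work. A condensed sketch of that topology, keeping only the first interface pair on each side; names and addresses are taken from the trace above, but this is a recap, not the harness script itself:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair (stays in the root netns)
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end moves into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                        # bridge the two host-side peers together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                             # root netns (initiator side) -> target address

With that in place the root namespace reaches 10.0.0.3/10.0.0.4 across the bridge and the target namespace reaches 10.0.0.1/10.0.0.2 back, which is what the four pings confirm just before nvmf_tgt is launched inside the namespace (the ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2 line above).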
00:11:34.623 [2024-12-05 02:56:05.257230] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.623 [2024-12-05 02:56:05.448884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.881 [2024-12-05 02:56:05.575383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.881 [2024-12-05 02:56:05.575463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.881 [2024-12-05 02:56:05.575500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.881 [2024-12-05 02:56:05.575530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.881 [2024-12-05 02:56:05.575547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.881 [2024-12-05 02:56:05.577036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.140 [2024-12-05 02:56:05.794731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:35.399 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.399 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:35.399 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:35.399 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:35.399 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.656 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.656 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:35.656 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:35.656 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.656 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.656 [2024-12-05 02:56:06.277417] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.656 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.656 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:35.656 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.656 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.656 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.656 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:35.656 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.656 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:11:35.656 [2024-12-05 02:56:06.293610] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:35.656 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.657 malloc0 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:35.657 { 00:11:35.657 "params": { 00:11:35.657 "name": "Nvme$subsystem", 00:11:35.657 "trtype": "$TEST_TRANSPORT", 00:11:35.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:35.657 "adrfam": "ipv4", 00:11:35.657 "trsvcid": "$NVMF_PORT", 00:11:35.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:35.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:35.657 "hdgst": ${hdgst:-false}, 00:11:35.657 "ddgst": ${ddgst:-false} 00:11:35.657 }, 00:11:35.657 "method": "bdev_nvme_attach_controller" 00:11:35.657 } 00:11:35.657 EOF 00:11:35.657 )") 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
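At this point zcopy.sh has the target configured end to end. Pulled out of the trace above and kept with the exact arguments that appear there (rpc_cmd is the autotest wrapper that forwards these to the target's JSON-RPC socket, /var/tmp/spdk.sock here), the target-side setup amounts to:

  rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy                                      # TCP transport with the zero-copy path enabled
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow any host, up to 10 namespaces
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  rpc_cmd bdev_malloc_create 32 4096 -b malloc0                                             # 32 MiB RAM bdev, 4096-byte blocks
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1                     # expose it as namespace 1

The initiator half of the test is the bdevperf invocation that follows (--json /dev/fd/62 -t 10 -q 128 -w verify -o 8192); the JSON it is fed, printed just below, is a single bdev_nvme_attach_controller call pointing Nvme1 at 10.0.0.3:4420 / nqn.2016-06.io.spdk:cnode1.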
00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:35.657 02:56:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:35.657 "params": { 00:11:35.657 "name": "Nvme1", 00:11:35.657 "trtype": "tcp", 00:11:35.657 "traddr": "10.0.0.3", 00:11:35.657 "adrfam": "ipv4", 00:11:35.657 "trsvcid": "4420", 00:11:35.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:35.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:35.657 "hdgst": false, 00:11:35.657 "ddgst": false 00:11:35.657 }, 00:11:35.657 "method": "bdev_nvme_attach_controller" 00:11:35.657 }' 00:11:35.657 [2024-12-05 02:56:06.467344] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:11:35.657 [2024-12-05 02:56:06.467517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67803 ] 00:11:35.914 [2024-12-05 02:56:06.653300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.172 [2024-12-05 02:56:06.777725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.172 [2024-12-05 02:56:06.968300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:36.430 Running I/O for 10 seconds... 00:11:38.740 5092.00 IOPS, 39.78 MiB/s [2024-12-05T02:56:10.522Z] 5096.50 IOPS, 39.82 MiB/s [2024-12-05T02:56:11.459Z] 5125.67 IOPS, 40.04 MiB/s [2024-12-05T02:56:12.461Z] 5149.00 IOPS, 40.23 MiB/s [2024-12-05T02:56:13.398Z] 5164.80 IOPS, 40.35 MiB/s [2024-12-05T02:56:14.333Z] 5196.50 IOPS, 40.60 MiB/s [2024-12-05T02:56:15.271Z] 5218.86 IOPS, 40.77 MiB/s [2024-12-05T02:56:16.205Z] 5234.88 IOPS, 40.90 MiB/s [2024-12-05T02:56:17.585Z] 5248.22 IOPS, 41.00 MiB/s [2024-12-05T02:56:17.585Z] 5258.30 IOPS, 41.08 MiB/s 00:11:46.741 Latency(us) 00:11:46.741 [2024-12-05T02:56:17.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.741 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:46.741 Verification LBA range: start 0x0 length 0x1000 00:11:46.741 Nvme1n1 : 10.02 5260.42 41.10 0.00 0.00 24263.89 3425.75 33840.41 00:11:46.741 [2024-12-05T02:56:17.585Z] =================================================================================================================== 00:11:46.741 [2024-12-05T02:56:17.585Z] Total : 5260.42 41.10 0.00 0.00 24263.89 3425.75 33840.41 00:11:47.307 02:56:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67932 00:11:47.307 02:56:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:47.307 02:56:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:47.307 02:56:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:47.307 02:56:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:47.307 02:56:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:47.307 02:56:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:47.307 02:56:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:47.307 02:56:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:47.307 { 00:11:47.307 "params": { 00:11:47.307 "name": "Nvme$subsystem", 00:11:47.307 "trtype": "$TEST_TRANSPORT", 00:11:47.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:47.307 "adrfam": "ipv4", 00:11:47.307 "trsvcid": "$NVMF_PORT", 00:11:47.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:47.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:47.307 "hdgst": ${hdgst:-false}, 00:11:47.307 "ddgst": ${ddgst:-false} 00:11:47.307 }, 00:11:47.307 "method": "bdev_nvme_attach_controller" 00:11:47.307 } 00:11:47.307 EOF 00:11:47.307 )") 00:11:47.307 02:56:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:47.307 [2024-12-05 02:56:18.067885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.307 [2024-12-05 02:56:18.067945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.307 02:56:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:11:47.307 02:56:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:47.307 02:56:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:47.307 "params": { 00:11:47.307 "name": "Nvme1", 00:11:47.307 "trtype": "tcp", 00:11:47.307 "traddr": "10.0.0.3", 00:11:47.307 "adrfam": "ipv4", 00:11:47.307 "trsvcid": "4420", 00:11:47.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:47.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:47.307 "hdgst": false, 00:11:47.307 "ddgst": false 00:11:47.307 }, 00:11:47.307 "method": "bdev_nvme_attach_controller" 00:11:47.307 }' 00:11:47.307 [2024-12-05 02:56:18.079825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.307 [2024-12-05 02:56:18.080073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.307 [2024-12-05 02:56:18.091743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.307 [2024-12-05 02:56:18.091821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.307 [2024-12-05 02:56:18.103712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.307 [2024-12-05 02:56:18.103984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.307 [2024-12-05 02:56:18.115744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.307 [2024-12-05 02:56:18.115997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.307 [2024-12-05 02:56:18.127733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.307 [2024-12-05 02:56:18.127993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.307 [2024-12-05 02:56:18.139735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.307 [2024-12-05 02:56:18.139981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.151783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.152080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.163798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.164004] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.175740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.175997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.179140] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:11:47.565 [2024-12-05 02:56:18.179486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67932 ] 00:11:47.565 [2024-12-05 02:56:18.187759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.188023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.199746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.199998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.211783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.211991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.223784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.224052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.235792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.236009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.247831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.248037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.259848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.260046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.271802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.272039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.283818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.284014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.295901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.296221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.307879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.307917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.319830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
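From here on the trace is dominated by this repeating pair of messages: an nvmf_subsystem_add_ns request asking for NSID 1 arrives while that NSID is already taken, spdk_nvmf_subsystem_add_ns_ext turns it away ("Requested NSID 1 already in use"), and nvmf_rpc_ns_paused reports "Unable to add namespace" back to the caller. The attempts recur every few milliseconds and keep going through the second bdevperf run (-t 5 -q 128 -w randrw -M 50 -o 8192, started above as perfpid=67932), so the target is fielding a stream of rejected namespace-management RPCs while I/O is in flight; the per-second samples of that run (~9.8k IOPS at 8 KiB) show up interleaved with the rejections further down. As a hypothetical standalone reproduction — the harness drives this through rpc_cmd, but scripts/rpc.py is the stock SPDK RPC client and takes the same arguments:

  # malloc0 was attached as NSID 1 earlier; assuming it is still attached, a second add for
  # the same NSID is refused and the target logs the two messages repeated throughout this section.
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1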
00:11:47.565 [2024-12-05 02:56:18.319894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.331814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.331850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.343849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.343923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.355844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.355897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.362722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.565 [2024-12-05 02:56:18.367859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.367907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.379954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.380010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.391860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.391899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.565 [2024-12-05 02:56:18.403974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.565 [2024-12-05 02:56:18.404014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.415865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.415923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.427852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.428047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.439884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.439941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.451899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.451935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.458892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.824 [2024-12-05 02:56:18.463857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.463896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.475948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.476002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.487901] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.487961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.499885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.499920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.511934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.512010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.524023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.524104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.535979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.536058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.547907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.547944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.559910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.559966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.571980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.572313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.583956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.584032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.595936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.596133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.607930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.607989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.619946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.619988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.631352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:47.824 [2024-12-05 02:56:18.631977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.632014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.644001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.644281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.824 [2024-12-05 02:56:18.655948] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.824 [2024-12-05 02:56:18.656009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 [2024-12-05 02:56:18.668023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.668067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 [2024-12-05 02:56:18.680017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.680100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 [2024-12-05 02:56:18.692047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.692112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 [2024-12-05 02:56:18.703982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.704217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 [2024-12-05 02:56:18.716054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.716095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 [2024-12-05 02:56:18.727970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.728011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 [2024-12-05 02:56:18.740007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.740045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 [2024-12-05 02:56:18.752015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.752056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 [2024-12-05 02:56:18.764017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.764075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 [2024-12-05 02:56:18.776009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.776050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 [2024-12-05 02:56:18.788036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.788079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 [2024-12-05 02:56:18.800039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.800080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 [2024-12-05 02:56:18.812161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.812239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 [2024-12-05 02:56:18.824218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.824274] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 Running I/O for 5 seconds... 00:11:48.083 [2024-12-05 02:56:18.843084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.843140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 [2024-12-05 02:56:18.858746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.858813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 [2024-12-05 02:56:18.875836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.875893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 [2024-12-05 02:56:18.891046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.891093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 [2024-12-05 02:56:18.907264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.907328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.083 [2024-12-05 02:56:18.923650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.083 [2024-12-05 02:56:18.923796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.342 [2024-12-05 02:56:18.940738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.342 [2024-12-05 02:56:18.940829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.342 [2024-12-05 02:56:18.958062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.342 [2024-12-05 02:56:18.958149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.342 [2024-12-05 02:56:18.973390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.342 [2024-12-05 02:56:18.973448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.342 [2024-12-05 02:56:18.989475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.342 [2024-12-05 02:56:18.989554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.342 [2024-12-05 02:56:19.004100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.342 [2024-12-05 02:56:19.004173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.342 [2024-12-05 02:56:19.020221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.342 [2024-12-05 02:56:19.020313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.342 [2024-12-05 02:56:19.037348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.342 [2024-12-05 02:56:19.037406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.342 [2024-12-05 02:56:19.052724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.342 [2024-12-05 02:56:19.052814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.342 [2024-12-05 02:56:19.062744] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.342 [2024-12-05 02:56:19.062808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.342 [2024-12-05 02:56:19.079324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.342 [2024-12-05 02:56:19.079383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.342 [2024-12-05 02:56:19.095217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.342 [2024-12-05 02:56:19.095286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.342 [2024-12-05 02:56:19.112645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.342 [2024-12-05 02:56:19.112705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.342 [2024-12-05 02:56:19.129326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.342 [2024-12-05 02:56:19.129382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.342 [2024-12-05 02:56:19.144577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.342 [2024-12-05 02:56:19.144636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.342 [2024-12-05 02:56:19.159680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.342 [2024-12-05 02:56:19.159736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.342 [2024-12-05 02:56:19.175721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.342 [2024-12-05 02:56:19.175811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.601 [2024-12-05 02:56:19.191525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.601 [2024-12-05 02:56:19.191603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.601 [2024-12-05 02:56:19.208022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.601 [2024-12-05 02:56:19.208092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.601 [2024-12-05 02:56:19.219546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.601 [2024-12-05 02:56:19.219603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.601 [2024-12-05 02:56:19.235386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.601 [2024-12-05 02:56:19.235468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.601 [2024-12-05 02:56:19.252173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.601 [2024-12-05 02:56:19.252233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.601 [2024-12-05 02:56:19.268982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.601 [2024-12-05 02:56:19.269030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.601 [2024-12-05 02:56:19.285112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.601 [2024-12-05 02:56:19.285183] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.601 [2024-12-05 02:56:19.296634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.601 [2024-12-05 02:56:19.296710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.601 [2024-12-05 02:56:19.313531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.601 [2024-12-05 02:56:19.313592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.601 [2024-12-05 02:56:19.329328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.601 [2024-12-05 02:56:19.329403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.601 [2024-12-05 02:56:19.345737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.601 [2024-12-05 02:56:19.345820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.601 [2024-12-05 02:56:19.362720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.601 [2024-12-05 02:56:19.362817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.601 [2024-12-05 02:56:19.378604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.601 [2024-12-05 02:56:19.378665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.601 [2024-12-05 02:56:19.391857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.601 [2024-12-05 02:56:19.391920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.601 [2024-12-05 02:56:19.409874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.601 [2024-12-05 02:56:19.409931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.601 [2024-12-05 02:56:19.425371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.601 [2024-12-05 02:56:19.425433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.601 [2024-12-05 02:56:19.441879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.601 [2024-12-05 02:56:19.441936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.860 [2024-12-05 02:56:19.459189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.860 [2024-12-05 02:56:19.459270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.860 [2024-12-05 02:56:19.474542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.860 [2024-12-05 02:56:19.474647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.860 [2024-12-05 02:56:19.491597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.860 [2024-12-05 02:56:19.491659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.860 [2024-12-05 02:56:19.508978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.860 [2024-12-05 02:56:19.509021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.860 [2024-12-05 02:56:19.521440] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.860 [2024-12-05 02:56:19.521500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.860 [2024-12-05 02:56:19.537532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.860 [2024-12-05 02:56:19.537588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.860 [2024-12-05 02:56:19.553937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.860 [2024-12-05 02:56:19.553988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.860 [2024-12-05 02:56:19.571713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.860 [2024-12-05 02:56:19.571797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.860 [2024-12-05 02:56:19.588319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.860 [2024-12-05 02:56:19.588368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.860 [2024-12-05 02:56:19.605229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.860 [2024-12-05 02:56:19.605286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.860 [2024-12-05 02:56:19.621798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.860 [2024-12-05 02:56:19.621886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.860 [2024-12-05 02:56:19.636894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.860 [2024-12-05 02:56:19.636956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.860 [2024-12-05 02:56:19.653887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.860 [2024-12-05 02:56:19.653946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.860 [2024-12-05 02:56:19.668093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.860 [2024-12-05 02:56:19.668148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.860 [2024-12-05 02:56:19.684837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.860 [2024-12-05 02:56:19.684881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.860 [2024-12-05 02:56:19.700168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.860 [2024-12-05 02:56:19.700256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.118 [2024-12-05 02:56:19.716015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.118 [2024-12-05 02:56:19.716079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.118 [2024-12-05 02:56:19.732227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.118 [2024-12-05 02:56:19.732317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.118 [2024-12-05 02:56:19.748881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.118 [2024-12-05 02:56:19.748936] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.118 [2024-12-05 02:56:19.766060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.118 [2024-12-05 02:56:19.766124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.118 [2024-12-05 02:56:19.782534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.118 [2024-12-05 02:56:19.782624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.118 [2024-12-05 02:56:19.799721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.118 [2024-12-05 02:56:19.799792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.118 [2024-12-05 02:56:19.816009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.118 [2024-12-05 02:56:19.816056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.118 9874.00 IOPS, 77.14 MiB/s [2024-12-05T02:56:19.962Z] [2024-12-05 02:56:19.832706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.118 [2024-12-05 02:56:19.832797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.118 [2024-12-05 02:56:19.849019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.118 [2024-12-05 02:56:19.849082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.118 [2024-12-05 02:56:19.864490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.118 [2024-12-05 02:56:19.864547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.118 [2024-12-05 02:56:19.880856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.118 [2024-12-05 02:56:19.880901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.118 [2024-12-05 02:56:19.896892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.118 [2024-12-05 02:56:19.896936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.118 [2024-12-05 02:56:19.913402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.118 [2024-12-05 02:56:19.913461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.118 [2024-12-05 02:56:19.929473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.118 [2024-12-05 02:56:19.929529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.118 [2024-12-05 02:56:19.947248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.118 [2024-12-05 02:56:19.947307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.376 [2024-12-05 02:56:19.961981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.376 [2024-12-05 02:56:19.962024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.376 [2024-12-05 02:56:19.979271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.376 [2024-12-05 02:56:19.979350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.376 [2024-12-05 
02:56:19.995737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.376 [2024-12-05 02:56:19.995822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.376 [2024-12-05 02:56:20.012342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.376 [2024-12-05 02:56:20.012421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.376 [2024-12-05 02:56:20.028537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.376 [2024-12-05 02:56:20.028599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.376 [2024-12-05 02:56:20.044965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.376 [2024-12-05 02:56:20.045012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.376 [2024-12-05 02:56:20.061954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.376 [2024-12-05 02:56:20.061997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.376 [2024-12-05 02:56:20.079224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.376 [2024-12-05 02:56:20.079284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.376 [2024-12-05 02:56:20.094219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.376 [2024-12-05 02:56:20.094312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.376 [2024-12-05 02:56:20.111236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.376 [2024-12-05 02:56:20.111296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.376 [2024-12-05 02:56:20.126794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.376 [2024-12-05 02:56:20.126848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.376 [2024-12-05 02:56:20.143371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.376 [2024-12-05 02:56:20.143449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.376 [2024-12-05 02:56:20.159507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.376 [2024-12-05 02:56:20.159564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.377 [2024-12-05 02:56:20.175592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.377 [2024-12-05 02:56:20.175653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.377 [2024-12-05 02:56:20.186939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.377 [2024-12-05 02:56:20.186982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.377 [2024-12-05 02:56:20.203872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.377 [2024-12-05 02:56:20.203934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.635 [2024-12-05 02:56:20.219646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.635 [2024-12-05 02:56:20.219705] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.635 [2024-12-05 02:56:20.236923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.635 [2024-12-05 02:56:20.237000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.635 [2024-12-05 02:56:20.249440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.635 [2024-12-05 02:56:20.249514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.635 [2024-12-05 02:56:20.266031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.635 [2024-12-05 02:56:20.266121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.635 [2024-12-05 02:56:20.283423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.635 [2024-12-05 02:56:20.283482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.635 [2024-12-05 02:56:20.298695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.635 [2024-12-05 02:56:20.298751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.635 [2024-12-05 02:56:20.315680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.635 [2024-12-05 02:56:20.315738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.635 [2024-12-05 02:56:20.331112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.635 [2024-12-05 02:56:20.331167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.635 [2024-12-05 02:56:20.347608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.635 [2024-12-05 02:56:20.347665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.635 [2024-12-05 02:56:20.364338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.635 [2024-12-05 02:56:20.364394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.635 [2024-12-05 02:56:20.381227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.635 [2024-12-05 02:56:20.381286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.635 [2024-12-05 02:56:20.397976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.635 [2024-12-05 02:56:20.398024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.635 [2024-12-05 02:56:20.414446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.635 [2024-12-05 02:56:20.414510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.635 [2024-12-05 02:56:20.431570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.635 [2024-12-05 02:56:20.431638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.635 [2024-12-05 02:56:20.448935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.635 [2024-12-05 02:56:20.448993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.635 [2024-12-05 02:56:20.464709] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.635 [2024-12-05 02:56:20.464792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.635 [2024-12-05 02:56:20.477096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.635 [2024-12-05 02:56:20.477158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.895 [2024-12-05 02:56:20.492350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.895 [2024-12-05 02:56:20.492424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.895 [2024-12-05 02:56:20.508375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.895 [2024-12-05 02:56:20.508433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.895 [2024-12-05 02:56:20.523904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.895 [2024-12-05 02:56:20.523946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.895 [2024-12-05 02:56:20.540338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.895 [2024-12-05 02:56:20.540396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.895 [2024-12-05 02:56:20.556888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.895 [2024-12-05 02:56:20.556932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.895 [2024-12-05 02:56:20.572853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.895 [2024-12-05 02:56:20.572898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.895 [2024-12-05 02:56:20.588214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.895 [2024-12-05 02:56:20.588271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.895 [2024-12-05 02:56:20.604856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.895 [2024-12-05 02:56:20.604899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.895 [2024-12-05 02:56:20.620958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.895 [2024-12-05 02:56:20.621002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.895 [2024-12-05 02:56:20.632348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.895 [2024-12-05 02:56:20.632406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.895 [2024-12-05 02:56:20.648654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.895 [2024-12-05 02:56:20.648710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.895 [2024-12-05 02:56:20.664010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.895 [2024-12-05 02:56:20.664065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.895 [2024-12-05 02:56:20.674675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.895 [2024-12-05 02:56:20.674727] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.895 [2024-12-05 02:56:20.691791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.895 [2024-12-05 02:56:20.691892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.895 [2024-12-05 02:56:20.708223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.895 [2024-12-05 02:56:20.708280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.895 [2024-12-05 02:56:20.722574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.895 [2024-12-05 02:56:20.722641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.154 [2024-12-05 02:56:20.739369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.154 [2024-12-05 02:56:20.739443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.154 [2024-12-05 02:56:20.756514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.154 [2024-12-05 02:56:20.756571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.154 [2024-12-05 02:56:20.773928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.154 [2024-12-05 02:56:20.773991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.154 [2024-12-05 02:56:20.788492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.154 [2024-12-05 02:56:20.788569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.154 [2024-12-05 02:56:20.804104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.154 [2024-12-05 02:56:20.804178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.154 [2024-12-05 02:56:20.814826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.154 [2024-12-05 02:56:20.814883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.154 9802.00 IOPS, 76.58 MiB/s [2024-12-05T02:56:20.998Z] [2024-12-05 02:56:20.832550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.154 [2024-12-05 02:56:20.832608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.154 [2024-12-05 02:56:20.847367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.154 [2024-12-05 02:56:20.847425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.154 [2024-12-05 02:56:20.859954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.154 [2024-12-05 02:56:20.860001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.154 [2024-12-05 02:56:20.878892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.154 [2024-12-05 02:56:20.878975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.154 [2024-12-05 02:56:20.895510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.154 [2024-12-05 02:56:20.895584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.154 [2024-12-05 
02:56:20.911693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.154 [2024-12-05 02:56:20.911750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.154 [2024-12-05 02:56:20.927389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.154 [2024-12-05 02:56:20.927447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.155 [2024-12-05 02:56:20.938354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.155 [2024-12-05 02:56:20.938399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.155 [2024-12-05 02:56:20.954638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.155 [2024-12-05 02:56:20.954693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.155 [2024-12-05 02:56:20.969448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.155 [2024-12-05 02:56:20.969504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.155 [2024-12-05 02:56:20.986313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.155 [2024-12-05 02:56:20.986356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.414 [2024-12-05 02:56:21.002242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.414 [2024-12-05 02:56:21.002315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.414 [2024-12-05 02:56:21.017446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.414 [2024-12-05 02:56:21.017503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.414 [2024-12-05 02:56:21.028929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.414 [2024-12-05 02:56:21.028971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.414 [2024-12-05 02:56:21.044457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.414 [2024-12-05 02:56:21.044514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.414 [2024-12-05 02:56:21.059227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.414 [2024-12-05 02:56:21.059285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.414 [2024-12-05 02:56:21.075709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.414 [2024-12-05 02:56:21.075797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.414 [2024-12-05 02:56:21.088694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.414 [2024-12-05 02:56:21.088779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.414 [2024-12-05 02:56:21.105277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.414 [2024-12-05 02:56:21.105326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.414 [2024-12-05 02:56:21.119551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.414 [2024-12-05 02:56:21.119593] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.414 [2024-12-05 02:56:21.135467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.414 [2024-12-05 02:56:21.135509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.414 [2024-12-05 02:56:21.152561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.414 [2024-12-05 02:56:21.152603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.414 [2024-12-05 02:56:21.168750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.414 [2024-12-05 02:56:21.168854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.414 [2024-12-05 02:56:21.186368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.414 [2024-12-05 02:56:21.186574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.414 [2024-12-05 02:56:21.202057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.414 [2024-12-05 02:56:21.202100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.414 [2024-12-05 02:56:21.217820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.414 [2024-12-05 02:56:21.217862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.415 [2024-12-05 02:56:21.234938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.415 [2024-12-05 02:56:21.234978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.415 [2024-12-05 02:56:21.251202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.415 [2024-12-05 02:56:21.251244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.675 [2024-12-05 02:56:21.267561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.675 [2024-12-05 02:56:21.267604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.675 [2024-12-05 02:56:21.277600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.675 [2024-12-05 02:56:21.277641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.675 [2024-12-05 02:56:21.294148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.675 [2024-12-05 02:56:21.294202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.675 [2024-12-05 02:56:21.310811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.675 [2024-12-05 02:56:21.310926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.675 [2024-12-05 02:56:21.327082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.675 [2024-12-05 02:56:21.327315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.675 [2024-12-05 02:56:21.343401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.675 [2024-12-05 02:56:21.343443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.675 [2024-12-05 02:56:21.354023] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.675 [2024-12-05 02:56:21.354095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.675 [2024-12-05 02:56:21.370031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.675 [2024-12-05 02:56:21.370090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.675 [2024-12-05 02:56:21.385016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.675 [2024-12-05 02:56:21.385243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.675 [2024-12-05 02:56:21.400519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.675 [2024-12-05 02:56:21.400728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.675 [2024-12-05 02:56:21.417319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.675 [2024-12-05 02:56:21.417362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.675 [2024-12-05 02:56:21.434163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.675 [2024-12-05 02:56:21.434207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.675 [2024-12-05 02:56:21.446813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.675 [2024-12-05 02:56:21.446885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.675 [2024-12-05 02:56:21.464703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.675 [2024-12-05 02:56:21.464822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.675 [2024-12-05 02:56:21.479952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.675 [2024-12-05 02:56:21.479993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.675 [2024-12-05 02:56:21.496085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.675 [2024-12-05 02:56:21.496126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.675 [2024-12-05 02:56:21.512772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.675 [2024-12-05 02:56:21.512864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.934 [2024-12-05 02:56:21.527753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.934 [2024-12-05 02:56:21.527822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.934 [2024-12-05 02:56:21.544801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.934 [2024-12-05 02:56:21.544842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.934 [2024-12-05 02:56:21.561380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.934 [2024-12-05 02:56:21.561422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.934 [2024-12-05 02:56:21.577747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.934 [2024-12-05 02:56:21.577814] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.934 [2024-12-05 02:56:21.594149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.934 [2024-12-05 02:56:21.594373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.934 [2024-12-05 02:56:21.611316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.934 [2024-12-05 02:56:21.611526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.934 [2024-12-05 02:56:21.627786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.934 [2024-12-05 02:56:21.628027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.934 [2024-12-05 02:56:21.644638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.934 [2024-12-05 02:56:21.644859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.934 [2024-12-05 02:56:21.660693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.934 [2024-12-05 02:56:21.660928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.934 [2024-12-05 02:56:21.672414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.934 [2024-12-05 02:56:21.672601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.934 [2024-12-05 02:56:21.687968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.934 [2024-12-05 02:56:21.688260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.934 [2024-12-05 02:56:21.703686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.934 [2024-12-05 02:56:21.703923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.934 [2024-12-05 02:56:21.720528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.934 [2024-12-05 02:56:21.720716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.934 [2024-12-05 02:56:21.736062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.934 [2024-12-05 02:56:21.736265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.934 [2024-12-05 02:56:21.752339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.934 [2024-12-05 02:56:21.752526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.934 [2024-12-05 02:56:21.769418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.934 [2024-12-05 02:56:21.769605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.194 [2024-12-05 02:56:21.785574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.194 [2024-12-05 02:56:21.785806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.194 [2024-12-05 02:56:21.801984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.194 [2024-12-05 02:56:21.802188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.194 [2024-12-05 02:56:21.818886] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.194 [2024-12-05 02:56:21.819074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.194 9880.67 IOPS, 77.19 MiB/s [2024-12-05T02:56:22.038Z] [2024-12-05 02:56:21.835881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.194 [2024-12-05 02:56:21.836183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.194 [2024-12-05 02:56:21.851003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.194 [2024-12-05 02:56:21.851268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.194 [2024-12-05 02:56:21.868077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.194 [2024-12-05 02:56:21.868280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.194 [2024-12-05 02:56:21.884162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.194 [2024-12-05 02:56:21.884221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.194 [2024-12-05 02:56:21.896446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.194 [2024-12-05 02:56:21.896492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.194 [2024-12-05 02:56:21.912746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.194 [2024-12-05 02:56:21.913014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.194 [2024-12-05 02:56:21.928429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.194 [2024-12-05 02:56:21.928616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.194 [2024-12-05 02:56:21.939842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.194 [2024-12-05 02:56:21.939896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.194 [2024-12-05 02:56:21.956091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.194 [2024-12-05 02:56:21.956151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.194 [2024-12-05 02:56:21.971787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.194 [2024-12-05 02:56:21.971996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.194 [2024-12-05 02:56:21.988549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.194 [2024-12-05 02:56:21.988592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.194 [2024-12-05 02:56:22.005748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.194 [2024-12-05 02:56:22.005832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.194 [2024-12-05 02:56:22.022026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.194 [2024-12-05 02:56:22.022070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.454 [2024-12-05 02:56:22.038981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:51.454 [2024-12-05 02:56:22.039216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.454 [2024-12-05 02:56:22.055895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.454 [2024-12-05 02:56:22.055942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.454 [2024-12-05 02:56:22.072910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.454 [2024-12-05 02:56:22.072969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.454 [2024-12-05 02:56:22.090561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.454 [2024-12-05 02:56:22.090649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.454 [2024-12-05 02:56:22.106950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.454 [2024-12-05 02:56:22.106991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.454 [2024-12-05 02:56:22.124054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.454 [2024-12-05 02:56:22.124111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.454 [2024-12-05 02:56:22.139876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.454 [2024-12-05 02:56:22.139933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.454 [2024-12-05 02:56:22.156145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.454 [2024-12-05 02:56:22.156203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.454 [2024-12-05 02:56:22.172933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.454 [2024-12-05 02:56:22.172976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.454 [2024-12-05 02:56:22.190556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.454 [2024-12-05 02:56:22.190639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.454 [2024-12-05 02:56:22.203307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.454 [2024-12-05 02:56:22.203384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.454 [2024-12-05 02:56:22.222134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.454 [2024-12-05 02:56:22.222192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.454 [2024-12-05 02:56:22.236512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.454 [2024-12-05 02:56:22.236574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.454 [2024-12-05 02:56:22.254487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.454 [2024-12-05 02:56:22.254536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.454 [2024-12-05 02:56:22.270886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.454 [2024-12-05 02:56:22.270935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.454 [2024-12-05 02:56:22.286671] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.454 [2024-12-05 02:56:22.286742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.714 [2024-12-05 02:56:22.298519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.714 [2024-12-05 02:56:22.298571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.714 [2024-12-05 02:56:22.314141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.714 [2024-12-05 02:56:22.314200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.714 [2024-12-05 02:56:22.330220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.714 [2024-12-05 02:56:22.330318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.714 [2024-12-05 02:56:22.347395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.714 [2024-12-05 02:56:22.347455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.714 [2024-12-05 02:56:22.362705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.714 [2024-12-05 02:56:22.362748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.714 [2024-12-05 02:56:22.377805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.714 [2024-12-05 02:56:22.377844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.714 [2024-12-05 02:56:22.393926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.714 [2024-12-05 02:56:22.393968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.714 [2024-12-05 02:56:22.410978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.714 [2024-12-05 02:56:22.411036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.714 [2024-12-05 02:56:22.427457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.714 [2024-12-05 02:56:22.427514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.714 [2024-12-05 02:56:22.444241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.714 [2024-12-05 02:56:22.444298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.714 [2024-12-05 02:56:22.461238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.714 [2024-12-05 02:56:22.461285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.714 [2024-12-05 02:56:22.474993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.714 [2024-12-05 02:56:22.475041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.714 [2024-12-05 02:56:22.492007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.714 [2024-12-05 02:56:22.492053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.714 [2024-12-05 02:56:22.507842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.714 [2024-12-05 02:56:22.507900] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.714 [2024-12-05 02:56:22.519532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.714 [2024-12-05 02:56:22.519608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.714 [2024-12-05 02:56:22.536124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.714 [2024-12-05 02:56:22.536180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.714 [2024-12-05 02:56:22.551987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.714 [2024-12-05 02:56:22.552045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.973 [2024-12-05 02:56:22.567458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.974 [2024-12-05 02:56:22.567517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.974 [2024-12-05 02:56:22.582940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.974 [2024-12-05 02:56:22.582983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.974 [2024-12-05 02:56:22.599450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.974 [2024-12-05 02:56:22.599506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.974 [2024-12-05 02:56:22.615362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.974 [2024-12-05 02:56:22.615419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.974 [2024-12-05 02:56:22.632496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.974 [2024-12-05 02:56:22.632556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.974 [2024-12-05 02:56:22.649232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.974 [2024-12-05 02:56:22.649304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.974 [2024-12-05 02:56:22.664519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.974 [2024-12-05 02:56:22.664576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.974 [2024-12-05 02:56:22.680938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.974 [2024-12-05 02:56:22.680981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.974 [2024-12-05 02:56:22.697759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.974 [2024-12-05 02:56:22.697881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.974 [2024-12-05 02:56:22.713660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.974 [2024-12-05 02:56:22.713783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.974 [2024-12-05 02:56:22.726632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.974 [2024-12-05 02:56:22.726720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.974 [2024-12-05 02:56:22.744946] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.974 [2024-12-05 02:56:22.745003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.974 [2024-12-05 02:56:22.760801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.974 [2024-12-05 02:56:22.760853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.974 [2024-12-05 02:56:22.777988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.974 [2024-12-05 02:56:22.778047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.974 [2024-12-05 02:56:22.794115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.974 [2024-12-05 02:56:22.794162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.974 [2024-12-05 02:56:22.812369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.974 [2024-12-05 02:56:22.812432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.234 9850.25 IOPS, 76.96 MiB/s [2024-12-05T02:56:23.078Z] [2024-12-05 02:56:22.828801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.234 [2024-12-05 02:56:22.828872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.234 [2024-12-05 02:56:22.844473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.234 [2024-12-05 02:56:22.844531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.234 [2024-12-05 02:56:22.855234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.234 [2024-12-05 02:56:22.855291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.234 [2024-12-05 02:56:22.870659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.234 [2024-12-05 02:56:22.870730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.234 [2024-12-05 02:56:22.886973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.234 [2024-12-05 02:56:22.887032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.234 [2024-12-05 02:56:22.907055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.234 [2024-12-05 02:56:22.907117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.234 [2024-12-05 02:56:22.923373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.234 [2024-12-05 02:56:22.923437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.234 [2024-12-05 02:56:22.938815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.234 [2024-12-05 02:56:22.938885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.234 [2024-12-05 02:56:22.953696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.234 [2024-12-05 02:56:22.953782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.234 [2024-12-05 02:56:22.969325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:52.234 [2024-12-05 02:56:22.969382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.234 [2024-12-05 02:56:22.981363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.234 [2024-12-05 02:56:22.981439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.234 [2024-12-05 02:56:22.998001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.234 [2024-12-05 02:56:22.998059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.234 [2024-12-05 02:56:23.013123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.234 [2024-12-05 02:56:23.013195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.234 [2024-12-05 02:56:23.024999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.234 [2024-12-05 02:56:23.025043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.234 [2024-12-05 02:56:23.041510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.234 [2024-12-05 02:56:23.041568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.234 [2024-12-05 02:56:23.056935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.234 [2024-12-05 02:56:23.056979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.234 [2024-12-05 02:56:23.069429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.234 [2024-12-05 02:56:23.069484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.493 [2024-12-05 02:56:23.084924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.493 [2024-12-05 02:56:23.084975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.493 [2024-12-05 02:56:23.101757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.493 [2024-12-05 02:56:23.101816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.493 [2024-12-05 02:56:23.117552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.493 [2024-12-05 02:56:23.117613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.493 [2024-12-05 02:56:23.129550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.493 [2024-12-05 02:56:23.129609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.493 [2024-12-05 02:56:23.146211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.493 [2024-12-05 02:56:23.146297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.493 [2024-12-05 02:56:23.161602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.493 [2024-12-05 02:56:23.161647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.493 [2024-12-05 02:56:23.173669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.493 [2024-12-05 02:56:23.173729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.493 [2024-12-05 02:56:23.187660] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.493 [2024-12-05 02:56:23.187718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.493 [2024-12-05 02:56:23.203605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.493 [2024-12-05 02:56:23.203663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.493 [2024-12-05 02:56:23.220866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.494 [2024-12-05 02:56:23.220910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.494 [2024-12-05 02:56:23.237906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.494 [2024-12-05 02:56:23.237950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.494 [2024-12-05 02:56:23.253887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.494 [2024-12-05 02:56:23.253930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.494 [2024-12-05 02:56:23.264609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.494 [2024-12-05 02:56:23.264668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.494 [2024-12-05 02:56:23.281568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.494 [2024-12-05 02:56:23.281628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.494 [2024-12-05 02:56:23.297437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.494 [2024-12-05 02:56:23.297513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.494 [2024-12-05 02:56:23.308606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.494 [2024-12-05 02:56:23.308683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.494 [2024-12-05 02:56:23.325324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.494 [2024-12-05 02:56:23.325412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.753 [2024-12-05 02:56:23.342212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.753 [2024-12-05 02:56:23.342322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.753 [2024-12-05 02:56:23.357752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.753 [2024-12-05 02:56:23.357820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.753 [2024-12-05 02:56:23.374085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.753 [2024-12-05 02:56:23.374143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.753 [2024-12-05 02:56:23.390935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.753 [2024-12-05 02:56:23.390978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.753 [2024-12-05 02:56:23.407609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.753 [2024-12-05 02:56:23.407666] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.753 [2024-12-05 02:56:23.423274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.753 [2024-12-05 02:56:23.423331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.753 [2024-12-05 02:56:23.440109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.753 [2024-12-05 02:56:23.440180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.753 [2024-12-05 02:56:23.455892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.753 [2024-12-05 02:56:23.455945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.753 [2024-12-05 02:56:23.472427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.753 [2024-12-05 02:56:23.472470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.753 [2024-12-05 02:56:23.488390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.753 [2024-12-05 02:56:23.488449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.753 [2024-12-05 02:56:23.501217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.753 [2024-12-05 02:56:23.501275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.753 [2024-12-05 02:56:23.518499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.753 [2024-12-05 02:56:23.518545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.753 [2024-12-05 02:56:23.534851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.753 [2024-12-05 02:56:23.534908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.753 [2024-12-05 02:56:23.552173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.753 [2024-12-05 02:56:23.552230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.754 [2024-12-05 02:56:23.567759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.754 [2024-12-05 02:56:23.567847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.754 [2024-12-05 02:56:23.579257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.754 [2024-12-05 02:56:23.579315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.013 [2024-12-05 02:56:23.596680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.013 [2024-12-05 02:56:23.596755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.013 [2024-12-05 02:56:23.612770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.013 [2024-12-05 02:56:23.612841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.013 [2024-12-05 02:56:23.628288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.013 [2024-12-05 02:56:23.628346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.013 [2024-12-05 02:56:23.643857] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.013 [2024-12-05 02:56:23.643899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.013 [2024-12-05 02:56:23.660062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.013 [2024-12-05 02:56:23.660121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.013 [2024-12-05 02:56:23.675835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.013 [2024-12-05 02:56:23.675880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.013 [2024-12-05 02:56:23.691672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.013 [2024-12-05 02:56:23.691730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.013 [2024-12-05 02:56:23.708553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.013 [2024-12-05 02:56:23.708611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.013 [2024-12-05 02:56:23.725316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.013 [2024-12-05 02:56:23.725374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.013 [2024-12-05 02:56:23.741613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.013 [2024-12-05 02:56:23.741671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.013 [2024-12-05 02:56:23.759073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.013 [2024-12-05 02:56:23.759117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.013 [2024-12-05 02:56:23.776002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.013 [2024-12-05 02:56:23.776045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.013 [2024-12-05 02:56:23.791299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.013 [2024-12-05 02:56:23.791356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.013 [2024-12-05 02:56:23.807835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.013 [2024-12-05 02:56:23.807881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.013 [2024-12-05 02:56:23.822802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.013 [2024-12-05 02:56:23.822854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.013 9811.40 IOPS, 76.65 MiB/s [2024-12-05T02:56:23.857Z] [2024-12-05 02:56:23.834833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.013 [2024-12-05 02:56:23.834913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.013
00:11:53.014 Latency(us)
00:11:53.014 [2024-12-05T02:56:23.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:53.014 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:53.014 Nvme1n1 : 5.01 9817.86 76.70 0.00 0.00 13022.12 5272.67 22639.71
00:11:53.014 [2024-12-05T02:56:23.858Z] ===================================================================================================================
00:11:53.014 [2024-12-05T02:56:23.858Z] Total : 9817.86 76.70 0.00 0.00 13022.12 5272.67 22639.71
00:11:53.014 [2024-12-05 02:56:23.846760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.014 [2024-12-05 02:56:23.846827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:23.858852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:23.858897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:23.870763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:23.870825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:23.882859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:23.882916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:23.894829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:23.894894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:23.906783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:23.906833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:23.918835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:23.918914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:23.930828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:23.930882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:23.942880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:23.942967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:23.954818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:23.954865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:23.966924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:23.966982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:23.978895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:23.978942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:23.990823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:23.990871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:24.002833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:24.002881]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:24.014800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:24.014845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:24.026837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:24.026885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:24.038825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:24.038872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:24.050816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:24.050865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:24.062888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:24.062924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:24.074860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:24.074896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:24.086857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:24.086892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:24.098861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:24.098897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.273 [2024-12-05 02:56:24.110951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.273 [2024-12-05 02:56:24.111040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.532 [2024-12-05 02:56:24.122952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.532 [2024-12-05 02:56:24.122996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.532 [2024-12-05 02:56:24.134881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.532 [2024-12-05 02:56:24.134919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.532 [2024-12-05 02:56:24.146880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.533 [2024-12-05 02:56:24.146917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.533 [2024-12-05 02:56:24.158919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.533 [2024-12-05 02:56:24.158960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.533 [2024-12-05 02:56:24.170974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.533 [2024-12-05 02:56:24.171030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.533 [2024-12-05 02:56:24.182914] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.533 [2024-12-05 02:56:24.182955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.533 [2024-12-05 02:56:24.194925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.533 [2024-12-05 02:56:24.194965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.533 [2024-12-05 02:56:24.206917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.533 [2024-12-05 02:56:24.206957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.533 [2024-12-05 02:56:24.218939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.533 [2024-12-05 02:56:24.218977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.533 [2024-12-05 02:56:24.230962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.533 [2024-12-05 02:56:24.231005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.533 [2024-12-05 02:56:24.242931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.533 [2024-12-05 02:56:24.242969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.533 [2024-12-05 02:56:24.254980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.533 [2024-12-05 02:56:24.255020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.533 [2024-12-05 02:56:24.267052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.533 [2024-12-05 02:56:24.267092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.533 [2024-12-05 02:56:24.278953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.533 [2024-12-05 02:56:24.278991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.533 [2024-12-05 02:56:24.290974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.533 [2024-12-05 02:56:24.291013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.533 [2024-12-05 02:56:24.302969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.533 [2024-12-05 02:56:24.303008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.533 [2024-12-05 02:56:24.315044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.533 [2024-12-05 02:56:24.315097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.533 [2024-12-05 02:56:24.326985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.533 [2024-12-05 02:56:24.327024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.533 [2024-12-05 02:56:24.338980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.533 [2024-12-05 02:56:24.339017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.533 [2024-12-05 02:56:24.351007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.533 [2024-12-05 02:56:24.351049] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.533 [2024-12-05 02:56:24.363007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.533 [2024-12-05 02:56:24.363046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.792 [2024-12-05 02:56:24.375066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.792 [2024-12-05 02:56:24.375142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.792 [2024-12-05 02:56:24.387023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.792 [2024-12-05 02:56:24.387065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.792 [2024-12-05 02:56:24.399009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.792 [2024-12-05 02:56:24.399047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.792 [2024-12-05 02:56:24.411120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.792 [2024-12-05 02:56:24.411185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.792 [2024-12-05 02:56:24.423044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.792 [2024-12-05 02:56:24.423081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.792 [2024-12-05 02:56:24.435020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.792 [2024-12-05 02:56:24.435056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.792 [2024-12-05 02:56:24.447040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.792 [2024-12-05 02:56:24.447078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.792 [2024-12-05 02:56:24.459043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.792 [2024-12-05 02:56:24.459079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.792 [2024-12-05 02:56:24.471038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.792 [2024-12-05 02:56:24.471076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.792 [2024-12-05 02:56:24.483080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.792 [2024-12-05 02:56:24.483152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.792 [2024-12-05 02:56:24.495060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.792 [2024-12-05 02:56:24.495103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.792 [2024-12-05 02:56:24.507102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.792 [2024-12-05 02:56:24.507167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.792 [2024-12-05 02:56:24.519086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.792 [2024-12-05 02:56:24.519184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.792 [2024-12-05 02:56:24.531065] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.792 [2024-12-05 02:56:24.531118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.792 [2024-12-05 02:56:24.543087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.792 [2024-12-05 02:56:24.543153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.792 [2024-12-05 02:56:24.555089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.792 [2024-12-05 02:56:24.555155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.792 [2024-12-05 02:56:24.567058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.792 [2024-12-05 02:56:24.567094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.792 [2024-12-05 02:56:24.579089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.792 [2024-12-05 02:56:24.579157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.792 [2024-12-05 02:56:24.591070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.793 [2024-12-05 02:56:24.591152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.793 [2024-12-05 02:56:24.603115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.793 [2024-12-05 02:56:24.603187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.793 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67932) - No such process 00:11:53.793 02:56:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67932 00:11:53.793 02:56:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.793 02:56:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.793 02:56:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:53.793 02:56:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.793 02:56:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:53.793 02:56:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.793 02:56:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:53.793 delay0 00:11:53.793 02:56:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.793 02:56:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:53.793 02:56:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.793 02:56:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:54.052 02:56:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.052 02:56:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.3 trsvcid:4420 ns:1' 00:11:54.052 [2024-12-05 02:56:24.879453] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:00.626 Initializing NVMe Controllers 00:12:00.626 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:12:00.626 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:00.626 Initialization complete. Launching workers. 00:12:00.626 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 70 00:12:00.626 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 357, failed to submit 33 00:12:00.626 success 240, unsuccessful 117, failed 0 00:12:00.626 02:56:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:00.626 02:56:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:00.626 02:56:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:00.626 02:56:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:12:00.626 02:56:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:00.626 02:56:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:12:00.626 02:56:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:00.626 02:56:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:00.626 rmmod nvme_tcp 00:12:00.626 rmmod nvme_fabrics 00:12:00.626 rmmod nvme_keyring 00:12:00.626 02:56:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.626 02:56:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:12:00.626 02:56:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:12:00.626 02:56:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 67770 ']' 00:12:00.626 02:56:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 67770 00:12:00.626 02:56:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 67770 ']' 00:12:00.626 02:56:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 67770 00:12:00.626 02:56:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:12:00.626 02:56:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.626 02:56:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67770 00:12:00.626 02:56:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:00.626 02:56:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:00.626 02:56:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67770' 00:12:00.626 killing process with pid 67770 00:12:00.626 02:56:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 67770 00:12:00.626 02:56:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 67770 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:01.563 02:56:32 
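The tail of the zcopy test traced above swaps the original namespace for a delay bdev and then drives the abort example against it. A minimal sketch of that sequence using scripts/rpc.py directly (rpc_cmd in the log is a thin wrapper around rpc.py; the NQN, bdev names, latencies and abort arguments are copied from the log, and a target listening on 10.0.0.3:4420 is assumed):

    # drop the namespace the paused-subsystem loop above was fighting over
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

    # wrap malloc0 in a delay bdev (latencies are in microseconds)
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000

    # expose the delay bdev as NSID 1 again
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

    # run the abort example over NVMe/TCP for 5 seconds at queue depth 64
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The deliberately huge delay latencies keep I/O outstanding long enough for the abort example to have something to cancel, which is what produces the submitted/success/unsuccessful counts reported above.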
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:12:01.563 00:12:01.563 real 0m27.876s 00:12:01.563 user 0m45.810s 00:12:01.563 sys 0m6.871s 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:01.563 ************************************ 00:12:01.563 END TEST nvmf_zcopy 00:12:01.563 ************************************ 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:01.563 ************************************ 00:12:01.563 START TEST nvmf_nmic 00:12:01.563 ************************************ 00:12:01.563 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:01.823 * Looking for test storage... 00:12:01.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:01.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.823 --rc genhtml_branch_coverage=1 00:12:01.823 --rc genhtml_function_coverage=1 00:12:01.823 --rc genhtml_legend=1 00:12:01.823 --rc geninfo_all_blocks=1 00:12:01.823 --rc geninfo_unexecuted_blocks=1 00:12:01.823 00:12:01.823 ' 00:12:01.823 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:01.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.823 --rc genhtml_branch_coverage=1 00:12:01.823 --rc genhtml_function_coverage=1 00:12:01.824 --rc genhtml_legend=1 00:12:01.824 --rc geninfo_all_blocks=1 00:12:01.824 --rc geninfo_unexecuted_blocks=1 00:12:01.824 00:12:01.824 ' 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:01.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.824 --rc genhtml_branch_coverage=1 00:12:01.824 --rc genhtml_function_coverage=1 00:12:01.824 --rc genhtml_legend=1 00:12:01.824 --rc geninfo_all_blocks=1 00:12:01.824 --rc geninfo_unexecuted_blocks=1 00:12:01.824 00:12:01.824 ' 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:01.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.824 --rc genhtml_branch_coverage=1 00:12:01.824 --rc genhtml_function_coverage=1 00:12:01.824 --rc genhtml_legend=1 00:12:01.824 --rc geninfo_all_blocks=1 00:12:01.824 --rc geninfo_unexecuted_blocks=1 00:12:01.824 00:12:01.824 ' 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.824 02:56:32 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:01.824 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:01.824 02:56:32 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:01.824 Cannot 
find device "nvmf_init_br" 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:01.824 Cannot find device "nvmf_init_br2" 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:01.824 Cannot find device "nvmf_tgt_br" 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:01.824 Cannot find device "nvmf_tgt_br2" 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:01.824 Cannot find device "nvmf_init_br" 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:12:01.824 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:02.084 Cannot find device "nvmf_init_br2" 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:02.084 Cannot find device "nvmf_tgt_br" 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:02.084 Cannot find device "nvmf_tgt_br2" 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:02.084 Cannot find device "nvmf_br" 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:02.084 Cannot find device "nvmf_init_if" 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:02.084 Cannot find device "nvmf_init_if2" 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:02.084 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:02.084 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:02.084 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:02.344 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:02.344 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:12:02.344 00:12:02.344 --- 10.0.0.3 ping statistics --- 00:12:02.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.344 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:02.344 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:02.344 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:12:02.344 00:12:02.344 --- 10.0.0.4 ping statistics --- 00:12:02.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.344 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:02.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:02.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:02.344 00:12:02.344 --- 10.0.0.1 ping statistics --- 00:12:02.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.344 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:02.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
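nvmf_veth_init, traced above, builds the test network: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined over the nvmf_br bridge, and iptables openings for port 4420. A condensed sketch of the same wiring for a single initiator/target pair (interface names and addresses copied from the log; the real helper also wires the second pair for 10.0.0.2/10.0.0.4):

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if ends carry addresses, the *_br ends join the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # let NVMe/TCP traffic reach the initiator interface and cross the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # connectivity check, mirroring the ping statistics in the log
    ping -c 1 10.0.0.3
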
00:12:02.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:12:02.344 00:12:02.344 --- 10.0.0.2 ping statistics --- 00:12:02.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.344 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=68328 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 68328 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 68328 ']' 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.344 02:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:02.344 [2024-12-05 02:56:33.092903] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
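nvmfappstart then launches the target inside that namespace and waitforlisten blocks until the JSON-RPC socket answers. A rough equivalent of those two helpers (the spdk_get_version probe is one way to test readiness, not the literal check the script performs; nvmfpid is just a local variable here):

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # poll the default RPC socket (/var/tmp/spdk.sock) until the app is up
    until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
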
00:12:02.344 [2024-12-05 02:56:33.093063] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.603 [2024-12-05 02:56:33.286216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.603 [2024-12-05 02:56:33.383936] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.603 [2024-12-05 02:56:33.384013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.603 [2024-12-05 02:56:33.384048] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.603 [2024-12-05 02:56:33.384060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.603 [2024-12-05 02:56:33.384072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:02.603 [2024-12-05 02:56:33.385879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.603 [2024-12-05 02:56:33.386000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.603 [2024-12-05 02:56:33.386083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.603 [2024-12-05 02:56:33.386242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.862 [2024-12-05 02:56:33.564143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:03.429 [2024-12-05 02:56:34.137090] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:03.429 Malloc0 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:03.429 02:56:34 
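Once the target is up, nmic.sh assembles its fixture over JSON-RPC: a TCP transport, a 64 MiB malloc bdev, subsystem cnode1 carrying that bdev as a namespace, and (in the calls traced just below) a listener on 10.0.0.3:4420. Restated as direct scripts/rpc.py calls, with all values copied from the log:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
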
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:03.429 [2024-12-05 02:56:34.249483] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.429 test case1: single bdev can't be used in multiple subsystems 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.429 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:03.688 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.688 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:03.688 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:03.688 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.688 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:03.688 [2024-12-05 02:56:34.277239] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:03.688 [2024-12-05 02:56:34.277299] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:03.688 [2024-12-05 02:56:34.277322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.688 request: 00:12:03.688 { 00:12:03.688 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:03.688 "namespace": { 00:12:03.688 "bdev_name": "Malloc0", 00:12:03.688 "no_auto_visible": false, 00:12:03.688 "hide_metadata": false 00:12:03.688 }, 00:12:03.688 "method": "nvmf_subsystem_add_ns", 00:12:03.688 "req_id": 1 00:12:03.688 } 00:12:03.688 Got JSON-RPC error response 00:12:03.688 response: 00:12:03.688 { 00:12:03.688 "code": -32602, 00:12:03.688 "message": "Invalid parameters" 00:12:03.688 } 00:12:03.688 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:03.688 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:03.688 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:03.688 Adding namespace failed - expected result. 00:12:03.688 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:03.688 test case2: host connect to nvmf target in multiple paths 00:12:03.688 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:03.688 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:12:03.688 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.688 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:03.688 [2024-12-05 02:56:34.289416] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:12:03.688 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.688 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:03.688 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:12:03.946 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:03.946 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:12:03.946 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.946 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:03.946 02:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:12:05.843 02:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:05.843 02:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:05.843 02:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.843 02:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:05.843 02:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:12:05.843 02:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:12:05.843 02:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:05.843 [global] 00:12:05.843 thread=1 00:12:05.843 invalidate=1 00:12:05.843 rw=write 00:12:05.843 time_based=1 00:12:05.843 runtime=1 00:12:05.843 ioengine=libaio 00:12:05.843 direct=1 00:12:05.843 bs=4096 00:12:05.843 iodepth=1 00:12:05.843 norandommap=0 00:12:05.843 numjobs=1 00:12:05.843 00:12:05.843 verify_dump=1 00:12:05.843 verify_backlog=512 00:12:05.843 verify_state_save=0 00:12:05.843 do_verify=1 00:12:05.843 verify=crc32c-intel 00:12:05.843 [job0] 00:12:05.843 filename=/dev/nvme0n1 00:12:05.843 Could not set queue depth (nvme0n1) 00:12:06.101 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.101 fio-3.35 00:12:06.101 Starting 1 thread 00:12:07.036 00:12:07.036 job0: (groupid=0, jobs=1): err= 0: pid=68419: Thu Dec 5 02:56:37 2024 00:12:07.036 read: IOPS=2454, BW=9818KiB/s (10.1MB/s)(9828KiB/1001msec) 00:12:07.036 slat (nsec): min=12180, max=72743, avg=16380.30, stdev=4445.38 00:12:07.036 clat (usec): min=168, max=7601, avg=222.88, stdev=214.16 00:12:07.036 lat (usec): min=185, max=7614, avg=239.26, stdev=214.56 00:12:07.036 clat percentiles (usec): 00:12:07.036 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:12:07.036 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:12:07.036 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 233], 95.00th=[ 245], 00:12:07.036 | 99.00th=[ 318], 99.50th=[ 498], 99.90th=[ 3818], 99.95th=[ 3916], 00:12:07.036 | 99.99th=[ 7570] 00:12:07.036 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:07.036 slat (nsec): min=17655, max=99820, avg=22750.82, stdev=5232.66 00:12:07.036 clat (usec): min=110, max=726, avg=134.53, stdev=18.32 00:12:07.036 lat (usec): min=131, max=754, avg=157.28, stdev=19.50 00:12:07.036 clat percentiles (usec): 00:12:07.036 | 1.00th=[ 115], 5.00th=[ 119], 10.00th=[ 121], 20.00th=[ 124], 00:12:07.036 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 135], 00:12:07.036 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 153], 95.00th=[ 161], 00:12:07.036 | 99.00th=[ 176], 99.50th=[ 188], 99.90th=[ 277], 99.95th=[ 322], 00:12:07.036 | 99.99th=[ 725] 00:12:07.036 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:12:07.036 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:07.036 lat (usec) : 250=98.19%, 500=1.55%, 750=0.08%, 1000=0.02% 00:12:07.036 lat (msec) : 2=0.04%, 4=0.10%, 10=0.02% 00:12:07.036 cpu : usr=2.30%, sys=7.60%, ctx=5020, majf=0, minf=5 00:12:07.036 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.036 issued rwts: total=2457,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.036 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.036 00:12:07.036 Run status group 0 (all jobs): 00:12:07.036 READ: bw=9818KiB/s (10.1MB/s), 9818KiB/s-9818KiB/s (10.1MB/s-10.1MB/s), io=9828KiB (10.1MB), run=1001-1001msec 00:12:07.036 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:12:07.036 00:12:07.036 
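The [job0] section above was generated by scripts/fio-wrapper from the flags -p nvmf -i 4096 -d 1 -t write -r 1 -v. A minimal standalone sketch of the same workload, assuming fio is on the PATH and /dev/nvme0n1 is the namespace connected earlier in this run (not re-verified here):

# Sketch only: 4 KiB sequential writes at queue depth 1 for a 1 second
# time-based run, with crc32c-intel data verification, mirroring the
# generated job file shown above.
fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
    --time_based=1 --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
    --verify_backlog=512 --verify_state_save=0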
Disk stats (read/write): 00:12:07.036 nvme0n1: ios=2098/2495, merge=0/0, ticks=495/368, in_queue=863, util=90.58% 00:12:07.294 02:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:07.294 02:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:07.294 02:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:12:07.294 02:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:07.294 02:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.294 02:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.294 02:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:07.294 02:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:12:07.294 02:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:07.294 02:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:07.294 02:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:07.294 02:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:07.294 02:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:07.294 02:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:07.294 02:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:07.294 02:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:07.294 rmmod nvme_tcp 00:12:07.294 rmmod nvme_fabrics 00:12:07.294 rmmod nvme_keyring 00:12:07.294 02:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:07.294 02:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:07.294 02:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:07.294 02:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 68328 ']' 00:12:07.294 02:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 68328 00:12:07.294 02:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 68328 ']' 00:12:07.294 02:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 68328 00:12:07.294 02:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:12:07.294 02:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.294 02:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68328 00:12:07.294 02:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.294 02:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.294 02:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68328' 00:12:07.294 killing process with pid 68328 
00:12:07.294 02:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 68328 00:12:07.294 02:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 68328 00:12:08.665 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:08.665 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:12:08.666 00:12:08.666 real 0m7.088s 00:12:08.666 user 0m21.584s 00:12:08.666 sys 0m2.367s 00:12:08.666 ************************************ 00:12:08.666 END TEST nvmf_nmic 00:12:08.666 ************************************ 00:12:08.666 02:56:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.666 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:08.924 ************************************ 00:12:08.924 START TEST nvmf_fio_target 00:12:08.924 ************************************ 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:08.924 * Looking for test storage... 00:12:08.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.924 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:08.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.924 --rc genhtml_branch_coverage=1 00:12:08.924 --rc genhtml_function_coverage=1 00:12:08.924 --rc genhtml_legend=1 00:12:08.924 --rc geninfo_all_blocks=1 00:12:08.924 --rc geninfo_unexecuted_blocks=1 00:12:08.924 00:12:08.924 ' 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:08.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.925 --rc genhtml_branch_coverage=1 00:12:08.925 --rc genhtml_function_coverage=1 00:12:08.925 --rc genhtml_legend=1 00:12:08.925 --rc geninfo_all_blocks=1 00:12:08.925 --rc geninfo_unexecuted_blocks=1 00:12:08.925 00:12:08.925 ' 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:08.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.925 --rc genhtml_branch_coverage=1 00:12:08.925 --rc genhtml_function_coverage=1 00:12:08.925 --rc genhtml_legend=1 00:12:08.925 --rc geninfo_all_blocks=1 00:12:08.925 --rc geninfo_unexecuted_blocks=1 00:12:08.925 00:12:08.925 ' 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:08.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.925 --rc genhtml_branch_coverage=1 00:12:08.925 --rc genhtml_function_coverage=1 00:12:08.925 --rc genhtml_legend=1 00:12:08.925 --rc geninfo_all_blocks=1 00:12:08.925 --rc geninfo_unexecuted_blocks=1 00:12:08.925 00:12:08.925 ' 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:08.925 
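The scripts/common.sh trace above is the "lt 1.15 2" check: the installed lcov version string is split on '.', '-' and ':' and compared element by element against 2, and because 1 < 2 the legacy --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 options are selected. A minimal bash sketch of that comparison (the helper name ver_lt is illustrative, not the script's own function):

# Returns 0 (true) when $1 is strictly older than $2, comparing each
# dot/dash/colon-separated component numerically, as in the trace above.
ver_lt() {
    local IFS='.-:'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # versions are equal, so not strictly less-than
}

ver_lt 1.15 2 && echo 'lcov < 2: enable legacy --rc lcov_* coverage options'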
02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.925 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:08.925 02:56:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:08.925 Cannot find device "nvmf_init_br" 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:12:08.925 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:09.183 Cannot find device "nvmf_init_br2" 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:09.183 Cannot find device "nvmf_tgt_br" 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:09.183 Cannot find device "nvmf_tgt_br2" 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:09.183 Cannot find device "nvmf_init_br" 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:09.183 Cannot find device "nvmf_init_br2" 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:09.183 Cannot find device "nvmf_tgt_br" 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:09.183 Cannot find device "nvmf_tgt_br2" 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:09.183 Cannot find device "nvmf_br" 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:09.183 Cannot find device "nvmf_init_if" 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:09.183 Cannot find device "nvmf_init_if2" 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:09.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:12:09.183 
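The "Cannot find device" and "Cannot open network namespace" messages above are the expected no-op cleanup pass before nvmf_veth_init rebuilds the test network; the commands that follow create a namespace for the target, veth pairs for initiator and target, and bridge them together. A condensed sketch of that topology, assuming iproute2 and the addresses used by this run (one initiator/target pair instead of the two pairs the script creates; the iptables ACCEPT rules for port 4420 traced below are omitted):

# Target side lives in its own namespace; initiator side stays on the host.
# Both veth peers are enslaved to the nvmf_br bridge so 10.0.0.1 (initiator)
# can reach 10.0.0.3 (the NVMe/TCP listener address).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ping -c 1 10.0.0.3   # host-side sanity check, as done at the end of the setup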
02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:09.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:09.183 02:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:09.183 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:09.183 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:09.183 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:09.183 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:09.183 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:09.441 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:09.441 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:12:09.441 00:12:09.441 --- 10.0.0.3 ping statistics --- 00:12:09.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.441 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:09.441 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:09.441 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:12:09.441 00:12:09.441 --- 10.0.0.4 ping statistics --- 00:12:09.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.441 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:09.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:09.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:12:09.441 00:12:09.441 --- 10.0.0.1 ping statistics --- 00:12:09.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.441 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:09.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:09.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:12:09.441 00:12:09.441 --- 10.0.0.2 ping statistics --- 00:12:09.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.441 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=68666 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 68666 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 68666 ']' 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.441 02:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.698 [2024-12-05 02:56:40.304858] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:12:09.698 [2024-12-05 02:56:40.305025] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.698 [2024-12-05 02:56:40.494484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.956 [2024-12-05 02:56:40.618210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.956 [2024-12-05 02:56:40.618266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.956 [2024-12-05 02:56:40.618301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.956 [2024-12-05 02:56:40.618337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.956 [2024-12-05 02:56:40.618364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:09.956 [2024-12-05 02:56:40.620105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.956 [2024-12-05 02:56:40.620266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.956 [2024-12-05 02:56:40.620447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.956 [2024-12-05 02:56:40.621005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.956 [2024-12-05 02:56:40.784127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:10.521 02:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.521 02:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:12:10.521 02:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:10.521 02:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:10.521 02:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.521 02:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.521 02:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:10.777 [2024-12-05 02:56:41.614408] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.034 02:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:11.291 02:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:11.291 02:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:11.549 02:56:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:11.549 02:56:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:11.807 02:56:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:11.807 02:56:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:12.066 02:56:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:12.066 02:56:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:12.325 02:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:12.892 02:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:12.892 02:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:13.150 02:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:13.150 02:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:13.409 02:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:13.409 02:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:13.667 02:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:13.935 02:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:13.935 02:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:14.207 02:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:14.207 02:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:14.477 02:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:14.738 [2024-12-05 02:56:45.435320] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:14.738 02:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:14.997 02:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:15.256 02:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:15.515 02:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:15.515 02:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:12:15.515 02:56:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.515 02:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:12:15.515 02:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:12:15.515 02:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:12:17.434 02:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:17.434 02:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:17.434 02:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.434 02:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:12:17.434 02:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.434 02:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:12:17.434 02:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:17.434 [global] 00:12:17.434 thread=1 00:12:17.434 invalidate=1 00:12:17.434 rw=write 00:12:17.434 time_based=1 00:12:17.434 runtime=1 00:12:17.434 ioengine=libaio 00:12:17.434 direct=1 00:12:17.434 bs=4096 00:12:17.434 iodepth=1 00:12:17.434 norandommap=0 00:12:17.434 numjobs=1 00:12:17.434 00:12:17.434 verify_dump=1 00:12:17.434 verify_backlog=512 00:12:17.434 verify_state_save=0 00:12:17.434 do_verify=1 00:12:17.434 verify=crc32c-intel 00:12:17.434 [job0] 00:12:17.434 filename=/dev/nvme0n1 00:12:17.434 [job1] 00:12:17.434 filename=/dev/nvme0n2 00:12:17.434 [job2] 00:12:17.434 filename=/dev/nvme0n3 00:12:17.434 [job3] 00:12:17.434 filename=/dev/nvme0n4 00:12:17.434 Could not set queue depth (nvme0n1) 00:12:17.434 Could not set queue depth (nvme0n2) 00:12:17.434 Could not set queue depth (nvme0n3) 00:12:17.434 Could not set queue depth (nvme0n4) 00:12:17.693 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:17.693 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:17.693 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:17.693 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:17.693 fio-3.35 00:12:17.693 Starting 4 threads 00:12:19.072 00:12:19.072 job0: (groupid=0, jobs=1): err= 0: pid=68857: Thu Dec 5 02:56:49 2024 00:12:19.072 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:12:19.072 slat (usec): min=11, max=104, avg=14.26, stdev= 4.43 00:12:19.072 clat (usec): min=161, max=2306, avg=200.34, stdev=80.53 00:12:19.072 lat (usec): min=174, max=2323, avg=214.60, stdev=81.15 00:12:19.072 clat percentiles (usec): 00:12:19.072 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 180], 00:12:19.072 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:12:19.072 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 233], 00:12:19.072 | 99.00th=[ 293], 99.50th=[ 412], 99.90th=[ 1975], 99.95th=[ 2278], 00:12:19.072 | 99.99th=[ 2311] 
00:12:19.072 write: IOPS=2647, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1001msec); 0 zone resets 00:12:19.072 slat (usec): min=13, max=105, avg=21.54, stdev= 6.77 00:12:19.072 clat (usec): min=111, max=359, avg=145.21, stdev=22.20 00:12:19.072 lat (usec): min=129, max=442, avg=166.76, stdev=24.14 00:12:19.072 clat percentiles (usec): 00:12:19.073 | 1.00th=[ 116], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 127], 00:12:19.073 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 143], 60.00th=[ 147], 00:12:19.073 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 174], 95.00th=[ 184], 00:12:19.073 | 99.00th=[ 208], 99.50th=[ 221], 99.90th=[ 338], 99.95th=[ 351], 00:12:19.073 | 99.99th=[ 359] 00:12:19.073 bw ( KiB/s): min=12288, max=12288, per=37.13%, avg=12288.00, stdev= 0.00, samples=1 00:12:19.073 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:19.073 lat (usec) : 250=98.85%, 500=0.94%, 750=0.10%, 1000=0.02% 00:12:19.073 lat (msec) : 2=0.06%, 4=0.04% 00:12:19.073 cpu : usr=2.00%, sys=7.40%, ctx=5212, majf=0, minf=13 00:12:19.073 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.073 issued rwts: total=2560,2650,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.073 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:19.073 job1: (groupid=0, jobs=1): err= 0: pid=68858: Thu Dec 5 02:56:49 2024 00:12:19.073 read: IOPS=1482, BW=5930KiB/s (6072kB/s)(5936KiB/1001msec) 00:12:19.073 slat (usec): min=9, max=539, avg=15.44, stdev=14.54 00:12:19.073 clat (usec): min=3, max=4487, avg=347.67, stdev=133.29 00:12:19.073 lat (usec): min=229, max=4500, avg=363.11, stdev=133.18 00:12:19.073 clat percentiles (usec): 00:12:19.073 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 318], 00:12:19.073 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 347], 00:12:19.073 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 375], 95.00th=[ 388], 00:12:19.073 | 99.00th=[ 412], 99.50th=[ 486], 99.90th=[ 2442], 99.95th=[ 4490], 00:12:19.073 | 99.99th=[ 4490] 00:12:19.073 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:19.073 slat (nsec): min=13350, max=79024, avg=27806.97, stdev=6869.38 00:12:19.073 clat (usec): min=188, max=556, avg=268.20, stdev=26.13 00:12:19.073 lat (usec): min=212, max=589, avg=296.00, stdev=26.99 00:12:19.073 clat percentiles (usec): 00:12:19.073 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 247], 00:12:19.073 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 273], 00:12:19.073 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 310], 00:12:19.073 | 99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 553], 99.95th=[ 553], 00:12:19.073 | 99.99th=[ 553] 00:12:19.073 bw ( KiB/s): min= 8192, max= 8192, per=24.75%, avg=8192.00, stdev= 0.00, samples=1 00:12:19.073 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:19.073 lat (usec) : 4=0.03%, 250=13.21%, 500=86.46%, 750=0.13%, 1000=0.03% 00:12:19.073 lat (msec) : 2=0.07%, 4=0.03%, 10=0.03% 00:12:19.073 cpu : usr=1.60%, sys=5.50%, ctx=3020, majf=0, minf=13 00:12:19.073 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.073 issued rwts: total=1484,1536,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:12:19.073 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:19.073 job2: (groupid=0, jobs=1): err= 0: pid=68859: Thu Dec 5 02:56:49 2024 00:12:19.073 read: IOPS=2315, BW=9263KiB/s (9485kB/s)(9272KiB/1001msec) 00:12:19.073 slat (nsec): min=10754, max=57776, avg=13288.06, stdev=3920.91 00:12:19.073 clat (usec): min=177, max=2314, avg=215.17, stdev=64.92 00:12:19.073 lat (usec): min=189, max=2329, avg=228.46, stdev=65.25 00:12:19.073 clat percentiles (usec): 00:12:19.073 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 196], 00:12:19.073 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:12:19.073 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 241], 95.00th=[ 251], 00:12:19.073 | 99.00th=[ 281], 99.50th=[ 302], 99.90th=[ 627], 99.95th=[ 2245], 00:12:19.073 | 99.99th=[ 2311] 00:12:19.073 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:19.073 slat (usec): min=13, max=106, avg=20.63, stdev= 6.26 00:12:19.073 clat (usec): min=125, max=359, avg=160.05, stdev=20.66 00:12:19.073 lat (usec): min=142, max=465, avg=180.68, stdev=22.54 00:12:19.073 clat percentiles (usec): 00:12:19.073 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:12:19.073 | 30.00th=[ 147], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 163], 00:12:19.073 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 190], 95.00th=[ 198], 00:12:19.073 | 99.00th=[ 219], 99.50th=[ 227], 99.90th=[ 255], 99.95th=[ 265], 00:12:19.073 | 99.99th=[ 359] 00:12:19.073 bw ( KiB/s): min=11280, max=11280, per=34.08%, avg=11280.00, stdev= 0.00, samples=1 00:12:19.073 iops : min= 2820, max= 2820, avg=2820.00, stdev= 0.00, samples=1 00:12:19.073 lat (usec) : 250=97.40%, 500=2.54%, 750=0.02% 00:12:19.073 lat (msec) : 4=0.04% 00:12:19.073 cpu : usr=1.80%, sys=6.90%, ctx=4878, majf=0, minf=5 00:12:19.073 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.073 issued rwts: total=2318,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.073 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:19.073 job3: (groupid=0, jobs=1): err= 0: pid=68860: Thu Dec 5 02:56:49 2024 00:12:19.073 read: IOPS=1482, BW=5930KiB/s (6072kB/s)(5936KiB/1001msec) 00:12:19.073 slat (nsec): min=10042, max=63651, avg=16837.86, stdev=5594.65 00:12:19.073 clat (usec): min=276, max=4465, avg=346.46, stdev=133.95 00:12:19.073 lat (usec): min=294, max=4487, avg=363.30, stdev=134.20 00:12:19.073 clat percentiles (usec): 00:12:19.073 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 318], 00:12:19.073 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 347], 00:12:19.073 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 375], 95.00th=[ 388], 00:12:19.073 | 99.00th=[ 412], 99.50th=[ 457], 99.90th=[ 2376], 99.95th=[ 4490], 00:12:19.073 | 99.99th=[ 4490] 00:12:19.073 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:19.073 slat (usec): min=12, max=105, avg=19.87, stdev= 5.69 00:12:19.073 clat (usec): min=204, max=632, avg=276.84, stdev=26.55 00:12:19.073 lat (usec): min=230, max=656, avg=296.72, stdev=26.75 00:12:19.073 clat percentiles (usec): 00:12:19.073 | 1.00th=[ 231], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 255], 00:12:19.073 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 281], 00:12:19.073 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 
95.00th=[ 318], 00:12:19.073 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 461], 99.95th=[ 635], 00:12:19.073 | 99.99th=[ 635] 00:12:19.073 bw ( KiB/s): min= 8192, max= 8192, per=24.75%, avg=8192.00, stdev= 0.00, samples=1 00:12:19.073 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:19.073 lat (usec) : 250=6.79%, 500=92.95%, 750=0.10% 00:12:19.073 lat (msec) : 2=0.07%, 4=0.07%, 10=0.03% 00:12:19.073 cpu : usr=2.00%, sys=4.00%, ctx=3021, majf=0, minf=7 00:12:19.073 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.073 issued rwts: total=1484,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.073 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:19.073 00:12:19.073 Run status group 0 (all jobs): 00:12:19.073 READ: bw=30.6MiB/s (32.1MB/s), 5930KiB/s-9.99MiB/s (6072kB/s-10.5MB/s), io=30.6MiB (32.1MB), run=1001-1001msec 00:12:19.073 WRITE: bw=32.3MiB/s (33.9MB/s), 6138KiB/s-10.3MiB/s (6285kB/s-10.8MB/s), io=32.4MiB (33.9MB), run=1001-1001msec 00:12:19.073 00:12:19.073 Disk stats (read/write): 00:12:19.073 nvme0n1: ios=2098/2466, merge=0/0, ticks=472/398, in_queue=870, util=88.38% 00:12:19.073 nvme0n2: ios=1121/1536, merge=0/0, ticks=380/431, in_queue=811, util=87.69% 00:12:19.073 nvme0n3: ios=2048/2104, merge=0/0, ticks=462/377, in_queue=839, util=89.10% 00:12:19.073 nvme0n4: ios=1098/1536, merge=0/0, ticks=376/384, in_queue=760, util=89.55% 00:12:19.073 02:56:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:19.073 [global] 00:12:19.073 thread=1 00:12:19.073 invalidate=1 00:12:19.073 rw=randwrite 00:12:19.073 time_based=1 00:12:19.073 runtime=1 00:12:19.073 ioengine=libaio 00:12:19.073 direct=1 00:12:19.073 bs=4096 00:12:19.073 iodepth=1 00:12:19.073 norandommap=0 00:12:19.073 numjobs=1 00:12:19.073 00:12:19.073 verify_dump=1 00:12:19.073 verify_backlog=512 00:12:19.073 verify_state_save=0 00:12:19.073 do_verify=1 00:12:19.073 verify=crc32c-intel 00:12:19.073 [job0] 00:12:19.073 filename=/dev/nvme0n1 00:12:19.073 [job1] 00:12:19.073 filename=/dev/nvme0n2 00:12:19.073 [job2] 00:12:19.073 filename=/dev/nvme0n3 00:12:19.073 [job3] 00:12:19.073 filename=/dev/nvme0n4 00:12:19.073 Could not set queue depth (nvme0n1) 00:12:19.073 Could not set queue depth (nvme0n2) 00:12:19.073 Could not set queue depth (nvme0n3) 00:12:19.073 Could not set queue depth (nvme0n4) 00:12:19.073 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:19.073 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:19.073 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:19.073 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:19.073 fio-3.35 00:12:19.073 Starting 4 threads 00:12:20.467 00:12:20.467 job0: (groupid=0, jobs=1): err= 0: pid=68913: Thu Dec 5 02:56:50 2024 00:12:20.467 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:12:20.467 slat (usec): min=20, max=174, avg=28.40, stdev= 8.31 00:12:20.467 clat (usec): min=211, max=918, avg=429.11, stdev=78.61 00:12:20.467 lat (usec): min=234, max=959, avg=457.51, stdev=79.93 
00:12:20.467 clat percentiles (usec): 00:12:20.467 | 1.00th=[ 343], 5.00th=[ 363], 10.00th=[ 371], 20.00th=[ 383], 00:12:20.467 | 30.00th=[ 396], 40.00th=[ 404], 50.00th=[ 412], 60.00th=[ 424], 00:12:20.467 | 70.00th=[ 445], 80.00th=[ 461], 90.00th=[ 486], 95.00th=[ 515], 00:12:20.467 | 99.00th=[ 848], 99.50th=[ 889], 99.90th=[ 914], 99.95th=[ 922], 00:12:20.467 | 99.99th=[ 922] 00:12:20.467 write: IOPS=1443, BW=5774KiB/s (5913kB/s)(5780KiB/1001msec); 0 zone resets 00:12:20.467 slat (usec): min=27, max=111, avg=40.67, stdev= 8.85 00:12:20.467 clat (usec): min=134, max=1566, avg=320.50, stdev=84.22 00:12:20.467 lat (usec): min=170, max=1616, avg=361.17, stdev=84.26 00:12:20.467 clat percentiles (usec): 00:12:20.467 | 1.00th=[ 145], 5.00th=[ 161], 10.00th=[ 210], 20.00th=[ 277], 00:12:20.467 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 318], 60.00th=[ 338], 00:12:20.467 | 70.00th=[ 359], 80.00th=[ 383], 90.00th=[ 412], 95.00th=[ 433], 00:12:20.467 | 99.00th=[ 474], 99.50th=[ 523], 99.90th=[ 1123], 99.95th=[ 1565], 00:12:20.467 | 99.99th=[ 1565] 00:12:20.467 bw ( KiB/s): min= 5952, max= 5952, per=22.31%, avg=5952.00, stdev= 0.00, samples=1 00:12:20.467 iops : min= 1488, max= 1488, avg=1488.00, stdev= 0.00, samples=1 00:12:20.467 lat (usec) : 250=7.41%, 500=89.59%, 750=2.03%, 1000=0.89% 00:12:20.467 lat (msec) : 2=0.08% 00:12:20.467 cpu : usr=2.00%, sys=7.40%, ctx=2470, majf=0, minf=5 00:12:20.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:20.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.467 issued rwts: total=1024,1445,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:20.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:20.467 job1: (groupid=0, jobs=1): err= 0: pid=68914: Thu Dec 5 02:56:50 2024 00:12:20.467 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:12:20.467 slat (nsec): min=12027, max=60982, avg=23708.60, stdev=6419.05 00:12:20.467 clat (usec): min=343, max=1065, avg=438.30, stdev=56.43 00:12:20.467 lat (usec): min=372, max=1079, avg=462.01, stdev=57.76 00:12:20.467 clat percentiles (usec): 00:12:20.467 | 1.00th=[ 363], 5.00th=[ 379], 10.00th=[ 388], 20.00th=[ 396], 00:12:20.467 | 30.00th=[ 404], 40.00th=[ 416], 50.00th=[ 424], 60.00th=[ 437], 00:12:20.467 | 70.00th=[ 453], 80.00th=[ 474], 90.00th=[ 519], 95.00th=[ 553], 00:12:20.467 | 99.00th=[ 611], 99.50th=[ 644], 99.90th=[ 685], 99.95th=[ 1074], 00:12:20.467 | 99.99th=[ 1074] 00:12:20.467 write: IOPS=1333, BW=5335KiB/s (5463kB/s)(5340KiB/1001msec); 0 zone resets 00:12:20.467 slat (usec): min=14, max=124, avg=34.24, stdev=10.54 00:12:20.467 clat (usec): min=208, max=1531, avg=354.90, stdev=69.74 00:12:20.467 lat (usec): min=245, max=1567, avg=389.15, stdev=70.70 00:12:20.467 clat percentiles (usec): 00:12:20.467 | 1.00th=[ 239], 5.00th=[ 289], 10.00th=[ 302], 20.00th=[ 310], 00:12:20.467 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 351], 00:12:20.467 | 70.00th=[ 363], 80.00th=[ 400], 90.00th=[ 445], 95.00th=[ 486], 00:12:20.467 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 652], 99.95th=[ 1532], 00:12:20.467 | 99.99th=[ 1532] 00:12:20.467 bw ( KiB/s): min= 5232, max= 5232, per=19.61%, avg=5232.00, stdev= 0.00, samples=1 00:12:20.467 iops : min= 1308, max= 1308, avg=1308.00, stdev= 0.00, samples=1 00:12:20.467 lat (usec) : 250=1.14%, 500=91.48%, 750=7.29% 00:12:20.467 lat (msec) : 2=0.08% 00:12:20.467 cpu : usr=1.40%, sys=6.50%, ctx=2359, 
majf=0, minf=9 00:12:20.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:20.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.467 issued rwts: total=1024,1335,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:20.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:20.467 job2: (groupid=0, jobs=1): err= 0: pid=68916: Thu Dec 5 02:56:50 2024 00:12:20.467 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:12:20.467 slat (nsec): min=10947, max=69361, avg=15172.45, stdev=4718.51 00:12:20.467 clat (usec): min=359, max=725, avg=447.65, stdev=55.73 00:12:20.467 lat (usec): min=372, max=761, avg=462.82, stdev=55.87 00:12:20.467 clat percentiles (usec): 00:12:20.467 | 1.00th=[ 371], 5.00th=[ 388], 10.00th=[ 396], 20.00th=[ 404], 00:12:20.467 | 30.00th=[ 412], 40.00th=[ 424], 50.00th=[ 433], 60.00th=[ 445], 00:12:20.467 | 70.00th=[ 461], 80.00th=[ 482], 90.00th=[ 529], 95.00th=[ 570], 00:12:20.467 | 99.00th=[ 627], 99.50th=[ 660], 99.90th=[ 701], 99.95th=[ 725], 00:12:20.467 | 99.99th=[ 725] 00:12:20.467 write: IOPS=1334, BW=5339KiB/s (5467kB/s)(5344KiB/1001msec); 0 zone resets 00:12:20.467 slat (nsec): min=14993, max=86766, avg=26731.79, stdev=8178.17 00:12:20.467 clat (usec): min=149, max=1494, avg=363.21, stdev=72.14 00:12:20.467 lat (usec): min=204, max=1544, avg=389.94, stdev=72.76 00:12:20.467 clat percentiles (usec): 00:12:20.467 | 1.00th=[ 247], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 318], 00:12:20.467 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 355], 00:12:20.467 | 70.00th=[ 375], 80.00th=[ 412], 90.00th=[ 461], 95.00th=[ 494], 00:12:20.467 | 99.00th=[ 570], 99.50th=[ 578], 99.90th=[ 635], 99.95th=[ 1500], 00:12:20.467 | 99.99th=[ 1500] 00:12:20.467 bw ( KiB/s): min= 5232, max= 5232, per=19.61%, avg=5232.00, stdev= 0.00, samples=1 00:12:20.467 iops : min= 1308, max= 1308, avg=1308.00, stdev= 0.00, samples=1 00:12:20.467 lat (usec) : 250=0.76%, 500=90.72%, 750=8.47% 00:12:20.467 lat (msec) : 2=0.04% 00:12:20.467 cpu : usr=2.20%, sys=3.40%, ctx=2361, majf=0, minf=21 00:12:20.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:20.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.467 issued rwts: total=1024,1336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:20.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:20.467 job3: (groupid=0, jobs=1): err= 0: pid=68918: Thu Dec 5 02:56:50 2024 00:12:20.467 read: IOPS=2259, BW=9039KiB/s (9256kB/s)(9048KiB/1001msec) 00:12:20.467 slat (nsec): min=10877, max=50266, avg=13892.17, stdev=4554.56 00:12:20.467 clat (usec): min=180, max=460, avg=217.09, stdev=23.77 00:12:20.467 lat (usec): min=193, max=482, avg=230.98, stdev=24.51 00:12:20.467 clat percentiles (usec): 00:12:20.467 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 198], 00:12:20.467 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:12:20.467 | 70.00th=[ 225], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 262], 00:12:20.467 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 351], 99.95th=[ 371], 00:12:20.467 | 99.99th=[ 461] 00:12:20.467 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:20.467 slat (nsec): min=13824, max=89583, avg=22043.37, stdev=6869.16 00:12:20.467 clat (usec): min=126, max=563, 
avg=160.73, stdev=24.25 00:12:20.467 lat (usec): min=144, max=592, avg=182.77, stdev=25.95 00:12:20.467 clat percentiles (usec): 00:12:20.467 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:12:20.467 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 163], 00:12:20.467 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 192], 95.00th=[ 202], 00:12:20.467 | 99.00th=[ 225], 99.50th=[ 251], 99.90th=[ 273], 99.95th=[ 469], 00:12:20.467 | 99.99th=[ 562] 00:12:20.467 bw ( KiB/s): min=10936, max=10936, per=40.99%, avg=10936.00, stdev= 0.00, samples=1 00:12:20.467 iops : min= 2734, max= 2734, avg=2734.00, stdev= 0.00, samples=1 00:12:20.467 lat (usec) : 250=95.79%, 500=4.19%, 750=0.02% 00:12:20.467 cpu : usr=1.90%, sys=7.20%, ctx=4824, majf=0, minf=11 00:12:20.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:20.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.467 issued rwts: total=2262,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:20.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:20.467 00:12:20.467 Run status group 0 (all jobs): 00:12:20.467 READ: bw=20.8MiB/s (21.8MB/s), 4092KiB/s-9039KiB/s (4190kB/s-9256kB/s), io=20.8MiB (21.8MB), run=1001-1001msec 00:12:20.467 WRITE: bw=26.1MiB/s (27.3MB/s), 5335KiB/s-9.99MiB/s (5463kB/s-10.5MB/s), io=26.1MiB (27.3MB), run=1001-1001msec 00:12:20.467 00:12:20.467 Disk stats (read/write): 00:12:20.467 nvme0n1: ios=1074/1101, merge=0/0, ticks=509/380, in_queue=889, util=90.08% 00:12:20.467 nvme0n2: ios=1067/1024, merge=0/0, ticks=457/364, in_queue=821, util=89.30% 00:12:20.468 nvme0n3: ios=1024/1024, merge=0/0, ticks=443/352, in_queue=795, util=89.65% 00:12:20.468 nvme0n4: ios=2054/2123, merge=0/0, ticks=466/376, in_queue=842, util=89.92% 00:12:20.468 02:56:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:20.468 [global] 00:12:20.468 thread=1 00:12:20.468 invalidate=1 00:12:20.468 rw=write 00:12:20.468 time_based=1 00:12:20.468 runtime=1 00:12:20.468 ioengine=libaio 00:12:20.468 direct=1 00:12:20.468 bs=4096 00:12:20.468 iodepth=128 00:12:20.468 norandommap=0 00:12:20.468 numjobs=1 00:12:20.468 00:12:20.468 verify_dump=1 00:12:20.468 verify_backlog=512 00:12:20.468 verify_state_save=0 00:12:20.468 do_verify=1 00:12:20.468 verify=crc32c-intel 00:12:20.468 [job0] 00:12:20.468 filename=/dev/nvme0n1 00:12:20.468 [job1] 00:12:20.468 filename=/dev/nvme0n2 00:12:20.468 [job2] 00:12:20.468 filename=/dev/nvme0n3 00:12:20.468 [job3] 00:12:20.468 filename=/dev/nvme0n4 00:12:20.468 Could not set queue depth (nvme0n1) 00:12:20.468 Could not set queue depth (nvme0n2) 00:12:20.468 Could not set queue depth (nvme0n3) 00:12:20.468 Could not set queue depth (nvme0n4) 00:12:20.468 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:20.468 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:20.468 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:20.468 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:20.468 fio-3.35 00:12:20.468 Starting 4 threads 00:12:21.848 00:12:21.848 job0: (groupid=0, jobs=1): err= 0: pid=68981: Thu Dec 5 
02:56:52 2024 00:12:21.848 read: IOPS=4809, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1002msec) 00:12:21.848 slat (usec): min=4, max=3889, avg=97.69, stdev=392.99 00:12:21.848 clat (usec): min=527, max=16778, avg=12930.45, stdev=1344.84 00:12:21.848 lat (usec): min=2307, max=17728, avg=13028.15, stdev=1377.34 00:12:21.848 clat percentiles (usec): 00:12:21.848 | 1.00th=[ 6587], 5.00th=[11338], 10.00th=[11994], 20.00th=[12518], 00:12:21.848 | 30.00th=[12780], 40.00th=[12911], 50.00th=[12911], 60.00th=[13042], 00:12:21.848 | 70.00th=[13173], 80.00th=[13566], 90.00th=[14353], 95.00th=[14746], 00:12:21.848 | 99.00th=[15533], 99.50th=[15664], 99.90th=[15795], 99.95th=[16188], 00:12:21.848 | 99.99th=[16909] 00:12:21.848 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:12:21.848 slat (usec): min=10, max=3657, avg=95.67, stdev=470.07 00:12:21.848 clat (usec): min=9751, max=16308, avg=12542.93, stdev=787.56 00:12:21.848 lat (usec): min=9768, max=16355, avg=12638.60, stdev=903.25 00:12:21.848 clat percentiles (usec): 00:12:21.848 | 1.00th=[10683], 5.00th=[11600], 10.00th=[11863], 20.00th=[12125], 00:12:21.848 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12518], 00:12:21.848 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13042], 95.00th=[14484], 00:12:21.848 | 99.00th=[15533], 99.50th=[15795], 99.90th=[16057], 99.95th=[16188], 00:12:21.848 | 99.99th=[16319] 00:12:21.848 bw ( KiB/s): min=20480, max=20480, per=34.77%, avg=20480.00, stdev= 0.00, samples=2 00:12:21.848 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:12:21.848 lat (usec) : 750=0.01% 00:12:21.848 lat (msec) : 4=0.20%, 10=0.70%, 20=99.08% 00:12:21.848 cpu : usr=4.70%, sys=13.19%, ctx=352, majf=0, minf=6 00:12:21.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:21.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:21.848 issued rwts: total=4819,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:21.848 job1: (groupid=0, jobs=1): err= 0: pid=68982: Thu Dec 5 02:56:52 2024 00:12:21.848 read: IOPS=2099, BW=8398KiB/s (8600kB/s)(8432KiB/1004msec) 00:12:21.848 slat (usec): min=6, max=10062, avg=221.31, stdev=1144.51 00:12:21.848 clat (usec): min=822, max=32315, avg=26947.48, stdev=3579.47 00:12:21.848 lat (usec): min=3958, max=32330, avg=27168.79, stdev=3418.29 00:12:21.848 clat percentiles (usec): 00:12:21.848 | 1.00th=[ 7832], 5.00th=[21890], 10.00th=[23987], 20.00th=[25297], 00:12:21.848 | 30.00th=[27395], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:12:21.848 | 70.00th=[28443], 80.00th=[28705], 90.00th=[29230], 95.00th=[30540], 00:12:21.848 | 99.00th=[32375], 99.50th=[32375], 99.90th=[32375], 99.95th=[32375], 00:12:21.848 | 99.99th=[32375] 00:12:21.848 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:12:21.848 slat (usec): min=10, max=8122, avg=201.08, stdev=991.48 00:12:21.848 clat (usec): min=17107, max=33104, avg=27189.47, stdev=2286.74 00:12:21.848 lat (usec): min=17255, max=33127, avg=27390.55, stdev=2057.02 00:12:21.848 clat percentiles (usec): 00:12:21.848 | 1.00th=[20579], 5.00th=[22414], 10.00th=[24773], 20.00th=[26346], 00:12:21.848 | 30.00th=[26608], 40.00th=[26870], 50.00th=[27132], 60.00th=[27395], 00:12:21.848 | 70.00th=[27657], 80.00th=[28705], 90.00th=[30278], 95.00th=[31327], 00:12:21.848 | 99.00th=[31851], 99.50th=[32113], 
99.90th=[33162], 99.95th=[33162], 00:12:21.848 | 99.99th=[33162] 00:12:21.848 bw ( KiB/s): min= 9536, max=10400, per=16.92%, avg=9968.00, stdev=610.94, samples=2 00:12:21.848 iops : min= 2384, max= 2600, avg=2492.00, stdev=152.74, samples=2 00:12:21.848 lat (usec) : 1000=0.02% 00:12:21.848 lat (msec) : 4=0.06%, 10=0.51%, 20=1.24%, 50=98.16% 00:12:21.848 cpu : usr=2.79%, sys=6.68%, ctx=180, majf=0, minf=7 00:12:21.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:12:21.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:21.848 issued rwts: total=2108,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:21.848 job2: (groupid=0, jobs=1): err= 0: pid=68984: Thu Dec 5 02:56:52 2024 00:12:21.848 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:12:21.848 slat (usec): min=5, max=5970, avg=116.63, stdev=531.07 00:12:21.848 clat (usec): min=10316, max=21718, avg=15183.72, stdev=1252.85 00:12:21.848 lat (usec): min=10952, max=21734, avg=15300.35, stdev=1265.58 00:12:21.848 clat percentiles (usec): 00:12:21.848 | 1.00th=[11863], 5.00th=[13173], 10.00th=[13698], 20.00th=[14615], 00:12:21.848 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15139], 60.00th=[15401], 00:12:21.848 | 70.00th=[15533], 80.00th=[15664], 90.00th=[16057], 95.00th=[17433], 00:12:21.848 | 99.00th=[19530], 99.50th=[19792], 99.90th=[21627], 99.95th=[21627], 00:12:21.848 | 99.99th=[21627] 00:12:21.848 write: IOPS=4529, BW=17.7MiB/s (18.6MB/s)(17.7MiB/1003msec); 0 zone resets 00:12:21.848 slat (usec): min=11, max=7340, avg=107.50, stdev=659.23 00:12:21.848 clat (usec): min=484, max=23374, avg=14223.22, stdev=1798.33 00:12:21.848 lat (usec): min=5336, max=23426, avg=14330.72, stdev=1896.23 00:12:21.848 clat percentiles (usec): 00:12:21.848 | 1.00th=[ 6456], 5.00th=[11469], 10.00th=[13042], 20.00th=[13566], 00:12:21.848 | 30.00th=[13829], 40.00th=[13960], 50.00th=[14091], 60.00th=[14484], 00:12:21.848 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15795], 95.00th=[16909], 00:12:21.848 | 99.00th=[19792], 99.50th=[20055], 99.90th=[21103], 99.95th=[22676], 00:12:21.848 | 99.99th=[23462] 00:12:21.848 bw ( KiB/s): min=17178, max=18176, per=30.01%, avg=17677.00, stdev=705.69, samples=2 00:12:21.848 iops : min= 4294, max= 4544, avg=4419.00, stdev=176.78, samples=2 00:12:21.848 lat (usec) : 500=0.01% 00:12:21.848 lat (msec) : 10=1.47%, 20=97.96%, 50=0.56% 00:12:21.848 cpu : usr=3.99%, sys=12.28%, ctx=262, majf=0, minf=5 00:12:21.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:21.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:21.848 issued rwts: total=4096,4543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:21.848 job3: (groupid=0, jobs=1): err= 0: pid=68985: Thu Dec 5 02:56:52 2024 00:12:21.848 read: IOPS=2064, BW=8259KiB/s (8457kB/s)(8284KiB/1003msec) 00:12:21.848 slat (usec): min=6, max=9592, avg=218.06, stdev=1099.91 00:12:21.848 clat (usec): min=1091, max=31614, avg=28039.67, stdev=2913.31 00:12:21.848 lat (usec): min=3977, max=31623, avg=28257.73, stdev=2703.92 00:12:21.848 clat percentiles (usec): 00:12:21.848 | 1.00th=[ 7504], 5.00th=[25035], 10.00th=[27132], 20.00th=[27657], 00:12:21.848 | 
30.00th=[27919], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:12:21.848 | 70.00th=[28705], 80.00th=[28967], 90.00th=[29754], 95.00th=[31065], 00:12:21.848 | 99.00th=[31589], 99.50th=[31589], 99.90th=[31589], 99.95th=[31589], 00:12:21.848 | 99.99th=[31589] 00:12:21.848 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:12:21.848 slat (usec): min=10, max=8364, avg=206.68, stdev=1019.01 00:12:21.848 clat (usec): min=7567, max=29008, avg=26460.76, stdev=2672.17 00:12:21.848 lat (usec): min=7593, max=29429, avg=26667.44, stdev=2486.39 00:12:21.848 clat percentiles (usec): 00:12:21.848 | 1.00th=[12387], 5.00th=[21365], 10.00th=[25822], 20.00th=[26346], 00:12:21.848 | 30.00th=[26608], 40.00th=[26608], 50.00th=[26870], 60.00th=[27132], 00:12:21.848 | 70.00th=[27395], 80.00th=[27657], 90.00th=[28443], 95.00th=[28705], 00:12:21.848 | 99.00th=[28967], 99.50th=[28967], 99.90th=[28967], 99.95th=[28967], 00:12:21.848 | 99.99th=[28967] 00:12:21.848 bw ( KiB/s): min= 9272, max=10368, per=16.67%, avg=9820.00, stdev=774.99, samples=2 00:12:21.848 iops : min= 2318, max= 2592, avg=2455.00, stdev=193.75, samples=2 00:12:21.848 lat (msec) : 2=0.02%, 4=0.02%, 10=0.80%, 20=1.45%, 50=97.71% 00:12:21.848 cpu : usr=2.20%, sys=7.49%, ctx=172, majf=0, minf=11 00:12:21.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:12:21.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:21.848 issued rwts: total=2071,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:21.848 00:12:21.848 Run status group 0 (all jobs): 00:12:21.848 READ: bw=50.9MiB/s (53.4MB/s), 8259KiB/s-18.8MiB/s (8457kB/s-19.7MB/s), io=51.1MiB (53.6MB), run=1002-1004msec 00:12:21.848 WRITE: bw=57.5MiB/s (60.3MB/s), 9.96MiB/s-20.0MiB/s (10.4MB/s-20.9MB/s), io=57.7MiB (60.6MB), run=1002-1004msec 00:12:21.848 00:12:21.848 Disk stats (read/write): 00:12:21.848 nvme0n1: ios=4146/4480, merge=0/0, ticks=16823/15513, in_queue=32336, util=89.47% 00:12:21.848 nvme0n2: ios=2001/2048, merge=0/0, ticks=13280/12375, in_queue=25655, util=88.78% 00:12:21.848 nvme0n3: ios=3584/3848, merge=0/0, ticks=26515/23410, in_queue=49925, util=89.19% 00:12:21.848 nvme0n4: ios=1926/2048, merge=0/0, ticks=12989/12713, in_queue=25702, util=89.54% 00:12:21.848 02:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:21.848 [global] 00:12:21.849 thread=1 00:12:21.849 invalidate=1 00:12:21.849 rw=randwrite 00:12:21.849 time_based=1 00:12:21.849 runtime=1 00:12:21.849 ioengine=libaio 00:12:21.849 direct=1 00:12:21.849 bs=4096 00:12:21.849 iodepth=128 00:12:21.849 norandommap=0 00:12:21.849 numjobs=1 00:12:21.849 00:12:21.849 verify_dump=1 00:12:21.849 verify_backlog=512 00:12:21.849 verify_state_save=0 00:12:21.849 do_verify=1 00:12:21.849 verify=crc32c-intel 00:12:21.849 [job0] 00:12:21.849 filename=/dev/nvme0n1 00:12:21.849 [job1] 00:12:21.849 filename=/dev/nvme0n2 00:12:21.849 [job2] 00:12:21.849 filename=/dev/nvme0n3 00:12:21.849 [job3] 00:12:21.849 filename=/dev/nvme0n4 00:12:21.849 Could not set queue depth (nvme0n1) 00:12:21.849 Could not set queue depth (nvme0n2) 00:12:21.849 Could not set queue depth (nvme0n3) 00:12:21.849 Could not set queue depth (nvme0n4) 00:12:21.849 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:21.849 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:21.849 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:21.849 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:21.849 fio-3.35 00:12:21.849 Starting 4 threads 00:12:23.230 00:12:23.230 job0: (groupid=0, jobs=1): err= 0: pid=69038: Thu Dec 5 02:56:53 2024 00:12:23.230 read: IOPS=4905, BW=19.2MiB/s (20.1MB/s)(19.3MiB/1006msec) 00:12:23.230 slat (usec): min=8, max=6637, avg=94.67, stdev=594.69 00:12:23.230 clat (usec): min=1855, max=21096, avg=13200.08, stdev=1638.20 00:12:23.230 lat (usec): min=5699, max=25196, avg=13294.75, stdev=1660.22 00:12:23.230 clat percentiles (usec): 00:12:23.230 | 1.00th=[ 6587], 5.00th=[ 9503], 10.00th=[12387], 20.00th=[12780], 00:12:23.230 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13435], 60.00th=[13566], 00:12:23.230 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14091], 95.00th=[14353], 00:12:23.230 | 99.00th=[20579], 99.50th=[20841], 99.90th=[21103], 99.95th=[21103], 00:12:23.230 | 99.99th=[21103] 00:12:23.230 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:12:23.230 slat (usec): min=9, max=8788, avg=96.80, stdev=574.74 00:12:23.230 clat (usec): min=6425, max=16876, avg=12167.89, stdev=1107.69 00:12:23.230 lat (usec): min=8373, max=16898, avg=12264.69, stdev=982.19 00:12:23.230 clat percentiles (usec): 00:12:23.230 | 1.00th=[ 7963], 5.00th=[10814], 10.00th=[11207], 20.00th=[11600], 00:12:23.230 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12256], 60.00th=[12387], 00:12:23.230 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13042], 95.00th=[13173], 00:12:23.230 | 99.00th=[16909], 99.50th=[16909], 99.90th=[16909], 99.95th=[16909], 00:12:23.230 | 99.99th=[16909] 00:12:23.230 bw ( KiB/s): min=20480, max=20480, per=36.16%, avg=20480.00, stdev= 0.00, samples=2 00:12:23.230 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:12:23.230 lat (msec) : 2=0.01%, 10=3.94%, 20=95.34%, 50=0.72% 00:12:23.230 cpu : usr=4.08%, sys=13.93%, ctx=214, majf=0, minf=7 00:12:23.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:23.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:23.230 issued rwts: total=4935,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:23.230 job1: (groupid=0, jobs=1): err= 0: pid=69039: Thu Dec 5 02:56:53 2024 00:12:23.230 read: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec) 00:12:23.230 slat (usec): min=4, max=10451, avg=238.73, stdev=1044.33 00:12:23.230 clat (usec): min=14880, max=82859, avg=29630.82, stdev=12614.36 00:12:23.230 lat (usec): min=16029, max=82894, avg=29869.56, stdev=12745.70 00:12:23.230 clat percentiles (usec): 00:12:23.230 | 1.00th=[16581], 5.00th=[18482], 10.00th=[18744], 20.00th=[19530], 00:12:23.230 | 30.00th=[22676], 40.00th=[26346], 50.00th=[27132], 60.00th=[27395], 00:12:23.230 | 70.00th=[27919], 80.00th=[33817], 90.00th=[49546], 95.00th=[61604], 00:12:23.230 | 99.00th=[70779], 99.50th=[76022], 99.90th=[78119], 99.95th=[83362], 00:12:23.230 | 99.99th=[83362] 00:12:23.230 write: IOPS=1727, BW=6911KiB/s (7076kB/s)(6952KiB/1006msec); 0 zone resets 
00:12:23.230 slat (usec): min=13, max=10134, avg=355.84, stdev=1324.74 00:12:23.230 clat (usec): min=3331, max=91395, avg=46414.86, stdev=21548.12 00:12:23.230 lat (usec): min=7479, max=91419, avg=46770.69, stdev=21641.01 00:12:23.230 clat percentiles (usec): 00:12:23.230 | 1.00th=[15008], 5.00th=[20317], 10.00th=[21890], 20.00th=[28443], 00:12:23.230 | 30.00th=[32113], 40.00th=[33424], 50.00th=[39584], 60.00th=[50070], 00:12:23.230 | 70.00th=[54264], 80.00th=[66847], 90.00th=[83362], 95.00th=[88605], 00:12:23.230 | 99.00th=[90702], 99.50th=[90702], 99.90th=[91751], 99.95th=[91751], 00:12:23.230 | 99.99th=[91751] 00:12:23.230 bw ( KiB/s): min= 5696, max= 7198, per=11.38%, avg=6447.00, stdev=1062.07, samples=2 00:12:23.230 iops : min= 1424, max= 1799, avg=1611.50, stdev=265.17, samples=2 00:12:23.230 lat (msec) : 4=0.03%, 10=0.24%, 20=14.26%, 50=59.25%, 100=26.21% 00:12:23.230 cpu : usr=1.39%, sys=6.17%, ctx=234, majf=0, minf=16 00:12:23.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:12:23.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:23.230 issued rwts: total=1536,1738,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:23.230 job2: (groupid=0, jobs=1): err= 0: pid=69040: Thu Dec 5 02:56:53 2024 00:12:23.230 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:12:23.230 slat (usec): min=9, max=4612, avg=116.65, stdev=462.66 00:12:23.230 clat (usec): min=10924, max=19598, avg=15177.05, stdev=1131.50 00:12:23.230 lat (usec): min=11313, max=22094, avg=15293.70, stdev=1193.89 00:12:23.230 clat percentiles (usec): 00:12:23.230 | 1.00th=[11994], 5.00th=[13042], 10.00th=[14091], 20.00th=[14746], 00:12:23.230 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15139], 60.00th=[15270], 00:12:23.230 | 70.00th=[15401], 80.00th=[15533], 90.00th=[16712], 95.00th=[17433], 00:12:23.230 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19530], 99.95th=[19530], 00:12:23.230 | 99.99th=[19530] 00:12:23.230 write: IOPS=4308, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1001msec); 0 zone resets 00:12:23.230 slat (usec): min=10, max=8793, avg=112.44, stdev=513.77 00:12:23.230 clat (usec): min=356, max=24308, avg=14899.57, stdev=1909.93 00:12:23.230 lat (usec): min=4130, max=24341, avg=15012.02, stdev=1961.95 00:12:23.230 clat percentiles (usec): 00:12:23.230 | 1.00th=[ 9241], 5.00th=[13304], 10.00th=[13698], 20.00th=[14091], 00:12:23.230 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:12:23.230 | 70.00th=[15401], 80.00th=[15926], 90.00th=[16581], 95.00th=[18220], 00:12:23.230 | 99.00th=[20317], 99.50th=[20579], 99.90th=[20841], 99.95th=[23462], 00:12:23.230 | 99.99th=[24249] 00:12:23.230 bw ( KiB/s): min=16384, max=16384, per=28.93%, avg=16384.00, stdev= 0.00, samples=1 00:12:23.230 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:23.230 lat (usec) : 500=0.01% 00:12:23.230 lat (msec) : 10=0.94%, 20=98.51%, 50=0.54% 00:12:23.230 cpu : usr=4.60%, sys=12.60%, ctx=399, majf=0, minf=7 00:12:23.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:23.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:23.230 issued rwts: total=4096,4313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.230 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:12:23.230 job3: (groupid=0, jobs=1): err= 0: pid=69041: Thu Dec 5 02:56:53 2024 00:12:23.230 read: IOPS=2769, BW=10.8MiB/s (11.3MB/s)(10.9MiB/1004msec) 00:12:23.230 slat (usec): min=8, max=9154, avg=175.71, stdev=849.79 00:12:23.230 clat (usec): min=2484, max=42980, avg=22219.18, stdev=4753.78 00:12:23.230 lat (usec): min=4966, max=43019, avg=22394.90, stdev=4784.55 00:12:23.230 clat percentiles (usec): 00:12:23.230 | 1.00th=[ 7963], 5.00th=[16581], 10.00th=[17957], 20.00th=[19268], 00:12:23.230 | 30.00th=[19792], 40.00th=[19792], 50.00th=[20317], 60.00th=[22152], 00:12:23.230 | 70.00th=[25297], 80.00th=[26870], 90.00th=[27657], 95.00th=[30016], 00:12:23.230 | 99.00th=[36963], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:12:23.230 | 99.99th=[42730] 00:12:23.230 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:12:23.230 slat (usec): min=12, max=9749, avg=157.72, stdev=843.95 00:12:23.230 clat (usec): min=9358, max=51723, avg=21078.13, stdev=8303.21 00:12:23.230 lat (usec): min=9405, max=51743, avg=21235.85, stdev=8385.35 00:12:23.230 clat percentiles (usec): 00:12:23.230 | 1.00th=[12911], 5.00th=[14615], 10.00th=[15401], 20.00th=[15664], 00:12:23.230 | 30.00th=[15926], 40.00th=[16450], 50.00th=[18220], 60.00th=[18744], 00:12:23.230 | 70.00th=[21890], 80.00th=[22938], 90.00th=[31065], 95.00th=[42206], 00:12:23.230 | 99.00th=[51119], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:12:23.230 | 99.99th=[51643] 00:12:23.230 bw ( KiB/s): min=12288, max=12312, per=21.72%, avg=12300.00, stdev=16.97, samples=2 00:12:23.230 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:12:23.230 lat (msec) : 4=0.02%, 10=0.56%, 20=52.42%, 50=46.04%, 100=0.96% 00:12:23.230 cpu : usr=2.69%, sys=9.67%, ctx=220, majf=0, minf=7 00:12:23.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:12:23.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:23.230 issued rwts: total=2781,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:23.230 00:12:23.230 Run status group 0 (all jobs): 00:12:23.230 READ: bw=51.8MiB/s (54.3MB/s), 6107KiB/s-19.2MiB/s (6254kB/s-20.1MB/s), io=52.1MiB (54.7MB), run=1001-1006msec 00:12:23.230 WRITE: bw=55.3MiB/s (58.0MB/s), 6911KiB/s-19.9MiB/s (7076kB/s-20.8MB/s), io=55.6MiB (58.3MB), run=1001-1006msec 00:12:23.230 00:12:23.231 Disk stats (read/write): 00:12:23.231 nvme0n1: ios=4146/4288, merge=0/0, ticks=51601/48068, in_queue=99669, util=88.58% 00:12:23.231 nvme0n2: ios=1323/1536, merge=0/0, ticks=12518/21977, in_queue=34495, util=87.08% 00:12:23.231 nvme0n3: ios=3512/3584, merge=0/0, ticks=16901/15810, in_queue=32711, util=88.54% 00:12:23.231 nvme0n4: ios=2256/2560, merge=0/0, ticks=24786/25071, in_queue=49857, util=89.48% 00:12:23.231 02:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:23.231 02:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=69054 00:12:23.231 02:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:23.231 02:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:23.231 [global] 00:12:23.231 thread=1 00:12:23.231 invalidate=1 00:12:23.231 rw=read 00:12:23.231 time_based=1 
00:12:23.231 runtime=10 00:12:23.231 ioengine=libaio 00:12:23.231 direct=1 00:12:23.231 bs=4096 00:12:23.231 iodepth=1 00:12:23.231 norandommap=1 00:12:23.231 numjobs=1 00:12:23.231 00:12:23.231 [job0] 00:12:23.231 filename=/dev/nvme0n1 00:12:23.231 [job1] 00:12:23.231 filename=/dev/nvme0n2 00:12:23.231 [job2] 00:12:23.231 filename=/dev/nvme0n3 00:12:23.231 [job3] 00:12:23.231 filename=/dev/nvme0n4 00:12:23.231 Could not set queue depth (nvme0n1) 00:12:23.231 Could not set queue depth (nvme0n2) 00:12:23.231 Could not set queue depth (nvme0n3) 00:12:23.231 Could not set queue depth (nvme0n4) 00:12:23.231 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:23.231 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:23.231 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:23.231 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:23.231 fio-3.35 00:12:23.231 Starting 4 threads 00:12:26.517 02:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:26.517 fio: pid=69097, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:26.517 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=39055360, buflen=4096 00:12:26.517 02:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:26.776 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=38789120, buflen=4096 00:12:26.776 fio: pid=69096, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:26.776 02:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:26.776 02:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:27.034 fio: pid=69094, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:27.034 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=43724800, buflen=4096 00:12:27.034 02:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:27.034 02:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:27.601 fio: pid=69095, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:27.601 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=56647680, buflen=4096 00:12:27.601 00:12:27.601 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69094: Thu Dec 5 02:56:58 2024 00:12:27.601 read: IOPS=2942, BW=11.5MiB/s (12.1MB/s)(41.7MiB/3628msec) 00:12:27.601 slat (usec): min=9, max=14307, avg=22.81, stdev=222.76 00:12:27.601 clat (usec): min=156, max=3568, avg=315.17, stdev=78.74 00:12:27.601 lat (usec): min=171, max=14575, avg=337.98, stdev=236.10 00:12:27.601 clat percentiles (usec): 00:12:27.601 | 1.00th=[ 167], 5.00th=[ 188], 10.00th=[ 231], 20.00th=[ 297], 00:12:27.601 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 330], 
00:12:27.601 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 355], 95.00th=[ 367], 00:12:27.601 | 99.00th=[ 465], 99.50th=[ 553], 99.90th=[ 1369], 99.95th=[ 1647], 00:12:27.601 | 99.99th=[ 2409] 00:12:27.601 bw ( KiB/s): min=11296, max=13248, per=27.65%, avg=11739.43, stdev=676.77, samples=7 00:12:27.601 iops : min= 2824, max= 3312, avg=2934.86, stdev=169.19, samples=7 00:12:27.601 lat (usec) : 250=10.58%, 500=88.61%, 750=0.57%, 1000=0.09% 00:12:27.601 lat (msec) : 2=0.11%, 4=0.03% 00:12:27.601 cpu : usr=0.96%, sys=4.71%, ctx=10680, majf=0, minf=1 00:12:27.601 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:27.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.601 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.601 issued rwts: total=10676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.601 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:27.601 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69095: Thu Dec 5 02:56:58 2024 00:12:27.601 read: IOPS=3373, BW=13.2MiB/s (13.8MB/s)(54.0MiB/4100msec) 00:12:27.601 slat (usec): min=7, max=13711, avg=17.73, stdev=233.42 00:12:27.601 clat (usec): min=58, max=107707, avg=277.38, stdev=919.98 00:12:27.601 lat (usec): min=164, max=107718, avg=295.11, stdev=948.96 00:12:27.601 clat percentiles (usec): 00:12:27.601 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 192], 00:12:27.601 | 30.00th=[ 208], 40.00th=[ 273], 50.00th=[ 289], 60.00th=[ 302], 00:12:27.601 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 338], 95.00th=[ 351], 00:12:27.601 | 99.00th=[ 379], 99.50th=[ 400], 99.90th=[ 1352], 99.95th=[ 2180], 00:12:27.601 | 99.99th=[ 8029] 00:12:27.601 bw ( KiB/s): min=12176, max=15568, per=30.36%, avg=12887.29, stdev=1202.62, samples=7 00:12:27.601 iops : min= 3044, max= 3892, avg=3221.71, stdev=300.65, samples=7 00:12:27.601 lat (usec) : 100=0.01%, 250=36.30%, 500=63.42%, 750=0.12%, 1000=0.04% 00:12:27.601 lat (msec) : 2=0.06%, 4=0.04%, 10=0.01%, 250=0.01% 00:12:27.601 cpu : usr=0.98%, sys=4.03%, ctx=13855, majf=0, minf=2 00:12:27.601 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:27.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.601 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.601 issued rwts: total=13831,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.601 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:27.601 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69096: Thu Dec 5 02:56:58 2024 00:12:27.601 read: IOPS=2846, BW=11.1MiB/s (11.7MB/s)(37.0MiB/3327msec) 00:12:27.601 slat (usec): min=14, max=8612, avg=22.11, stdev=113.95 00:12:27.601 clat (usec): min=179, max=3815, avg=327.03, stdev=77.76 00:12:27.601 lat (usec): min=194, max=8977, avg=349.14, stdev=139.09 00:12:27.601 clat percentiles (usec): 00:12:27.601 | 1.00th=[ 235], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 302], 00:12:27.601 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 330], 00:12:27.601 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 359], 95.00th=[ 367], 00:12:27.601 | 99.00th=[ 429], 99.50th=[ 545], 99.90th=[ 1123], 99.95th=[ 2114], 00:12:27.601 | 99.99th=[ 3818] 00:12:27.601 bw ( KiB/s): min=11288, max=11688, per=27.04%, avg=11478.67, stdev=161.23, samples=6 00:12:27.601 iops : min= 2822, max= 2922, avg=2869.67, stdev=40.31, 
samples=6 00:12:27.601 lat (usec) : 250=1.16%, 500=98.12%, 750=0.55%, 1000=0.04% 00:12:27.601 lat (msec) : 2=0.05%, 4=0.06% 00:12:27.601 cpu : usr=0.93%, sys=5.23%, ctx=9475, majf=0, minf=1 00:12:27.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:27.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.602 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.602 issued rwts: total=9471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:27.602 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69097: Thu Dec 5 02:56:58 2024 00:12:27.602 read: IOPS=3145, BW=12.3MiB/s (12.9MB/s)(37.2MiB/3032msec) 00:12:27.602 slat (usec): min=7, max=382, avg=13.07, stdev= 6.33 00:12:27.602 clat (usec): min=2, max=7301, avg=303.60, stdev=119.32 00:12:27.602 lat (usec): min=201, max=7316, avg=316.67, stdev=119.19 00:12:27.602 clat percentiles (usec): 00:12:27.602 | 1.00th=[ 198], 5.00th=[ 217], 10.00th=[ 251], 20.00th=[ 281], 00:12:27.602 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 310], 00:12:27.602 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 343], 95.00th=[ 355], 00:12:27.602 | 99.00th=[ 379], 99.50th=[ 388], 99.90th=[ 1663], 99.95th=[ 3589], 00:12:27.602 | 99.99th=[ 7308] 00:12:27.602 bw ( KiB/s): min=12184, max=13848, per=29.69%, avg=12601.33, stdev=618.80, samples=6 00:12:27.602 iops : min= 3046, max= 3462, avg=3150.33, stdev=154.70, samples=6 00:12:27.602 lat (usec) : 4=0.01%, 250=9.98%, 500=89.79%, 750=0.07%, 1000=0.03% 00:12:27.602 lat (msec) : 2=0.03%, 4=0.04%, 10=0.03% 00:12:27.602 cpu : usr=0.82%, sys=4.03%, ctx=9541, majf=0, minf=2 00:12:27.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:27.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.602 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.602 issued rwts: total=9536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:27.602 00:12:27.602 Run status group 0 (all jobs): 00:12:27.602 READ: bw=41.5MiB/s (43.5MB/s), 11.1MiB/s-13.2MiB/s (11.7MB/s-13.8MB/s), io=170MiB (178MB), run=3032-4100msec 00:12:27.602 00:12:27.602 Disk stats (read/write): 00:12:27.602 nvme0n1: ios=10653/0, merge=0/0, ticks=3372/0, in_queue=3372, util=95.41% 00:12:27.602 nvme0n2: ios=12703/0, merge=0/0, ticks=3562/0, in_queue=3562, util=95.19% 00:12:27.602 nvme0n3: ios=8885/0, merge=0/0, ticks=2922/0, in_queue=2922, util=96.34% 00:12:27.602 nvme0n4: ios=9048/0, merge=0/0, ticks=2678/0, in_queue=2678, util=96.26% 00:12:27.602 02:56:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:27.602 02:56:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:28.169 02:56:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:28.169 02:56:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:28.428 02:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:12:28.428 02:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:28.995 02:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:28.995 02:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:29.562 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:29.562 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:29.820 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:29.820 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 69054 00:12:29.820 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:29.820 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.820 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.820 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:12:29.820 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:29.820 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.820 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:29.820 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.820 nvmf hotplug test: fio failed as expected 00:12:29.820 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:12:29.820 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:29.820 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:29.820 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.078 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:30.078 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:30.078 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:30.078 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:30.078 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:30.078 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:30.078 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:30.078 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:30.078 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:30.078 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:30.078 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:30.078 rmmod nvme_tcp 00:12:30.078 rmmod nvme_fabrics 00:12:30.078 rmmod nvme_keyring 00:12:30.336 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:30.336 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:30.336 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:30.336 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 68666 ']' 00:12:30.336 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 68666 00:12:30.336 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 68666 ']' 00:12:30.336 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 68666 00:12:30.336 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:12:30.336 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.336 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68666 00:12:30.336 killing process with pid 68666 00:12:30.336 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.336 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.336 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68666' 00:12:30.336 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 68666 00:12:30.336 02:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 68666 00:12:31.273 02:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:31.273 02:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:31.273 02:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:31.273 02:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:31.273 02:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:31.273 02:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:31.273 02:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:31.273 02:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:31.273 02:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:31.273 02:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:31.273 02:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:31.273 02:57:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:31.273 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.273 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:31.273 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:31.273 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:31.273 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:31.273 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:31.273 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:31.273 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:31.537 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:31.537 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:31.537 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:31.537 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.537 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.537 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.537 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:12:31.537 ************************************ 00:12:31.537 END TEST nvmf_fio_target 00:12:31.537 ************************************ 00:12:31.537 00:12:31.537 real 0m22.676s 00:12:31.537 user 1m24.133s 00:12:31.537 sys 0m10.798s 00:12:31.537 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.537 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.537 02:57:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:31.537 02:57:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:31.537 02:57:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.537 02:57:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:31.537 ************************************ 00:12:31.537 START TEST nvmf_bdevio 00:12:31.537 ************************************ 00:12:31.537 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:31.537 * Looking for test storage... 
00:12:31.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:31.537 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:31.537 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:12:31.537 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:31.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.809 --rc genhtml_branch_coverage=1 00:12:31.809 --rc genhtml_function_coverage=1 00:12:31.809 --rc genhtml_legend=1 00:12:31.809 --rc geninfo_all_blocks=1 00:12:31.809 --rc geninfo_unexecuted_blocks=1 00:12:31.809 00:12:31.809 ' 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:31.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.809 --rc genhtml_branch_coverage=1 00:12:31.809 --rc genhtml_function_coverage=1 00:12:31.809 --rc genhtml_legend=1 00:12:31.809 --rc geninfo_all_blocks=1 00:12:31.809 --rc geninfo_unexecuted_blocks=1 00:12:31.809 00:12:31.809 ' 00:12:31.809 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:31.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.809 --rc genhtml_branch_coverage=1 00:12:31.809 --rc genhtml_function_coverage=1 00:12:31.809 --rc genhtml_legend=1 00:12:31.810 --rc geninfo_all_blocks=1 00:12:31.810 --rc geninfo_unexecuted_blocks=1 00:12:31.810 00:12:31.810 ' 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:31.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.810 --rc genhtml_branch_coverage=1 00:12:31.810 --rc genhtml_function_coverage=1 00:12:31.810 --rc genhtml_legend=1 00:12:31.810 --rc geninfo_all_blocks=1 00:12:31.810 --rc geninfo_unexecuted_blocks=1 00:12:31.810 00:12:31.810 ' 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:31.810 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
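For orientation, the nvmftestinit sequence traced below boils down to roughly the following shell sketch. Interface names, addresses, and port 4420 are taken from the trace itself; the loop is only a condensation, the harness's checks and error handling are omitted, and root privileges are assumed.

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per initiator/target interface; the *_br ends later join the bridge
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # target-side ends live inside the namespace where nvmf_tgt will run
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # a single bridge ties the host-side ends of all four pairs together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br  master nvmf_br
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br   master nvmf_br
  ip link set nvmf_tgt_br2  master nvmf_br
  # allow NVMe/TCP (port 4420) in and bridge-local forwarding; rules are tagged so teardown can find them
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
  # sanity pings in both directions before the test proper starts
  ping -c 1 10.0.0.3
  ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2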
00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:31.810 Cannot find device "nvmf_init_br" 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:31.810 Cannot find device "nvmf_init_br2" 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:31.810 Cannot find device "nvmf_tgt_br" 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.810 Cannot find device "nvmf_tgt_br2" 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:31.810 Cannot find device "nvmf_init_br" 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:31.810 Cannot find device "nvmf_init_br2" 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:31.810 Cannot find device "nvmf_tgt_br" 00:12:31.810 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:12:31.811 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:31.811 Cannot find device "nvmf_tgt_br2" 00:12:31.811 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:12:31.811 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:31.811 Cannot find device "nvmf_br" 00:12:31.811 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:12:31.811 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:31.811 Cannot find device "nvmf_init_if" 00:12:31.811 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:12:31.811 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:31.811 Cannot find device "nvmf_init_if2" 00:12:31.811 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:12:31.811 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:31.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.811 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:12:31.811 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:31.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.811 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:12:31.811 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:31.811 
02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:32.070 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:32.070 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:32.070 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:32.070 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:32.070 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:32.070 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:32.070 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:32.070 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:32.070 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:32.070 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:32.070 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:32.070 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:32.070 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:32.070 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:32.070 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:32.070 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:32.070 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:32.070 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:32.071 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:32.071 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:12:32.071 00:12:32.071 --- 10.0.0.3 ping statistics --- 00:12:32.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.071 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:32.071 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:32.071 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:12:32.071 00:12:32.071 --- 10.0.0.4 ping statistics --- 00:12:32.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.071 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:32.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:32.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:32.071 00:12:32.071 --- 10.0.0.1 ping statistics --- 00:12:32.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.071 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:32.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:32.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:12:32.071 00:12:32.071 --- 10.0.0.2 ping statistics --- 00:12:32.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.071 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:32.071 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:32.330 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:32.330 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:32.330 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:32.330 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.330 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=69439 00:12:32.330 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:32.330 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 69439 00:12:32.330 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 69439 ']' 00:12:32.330 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.330 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.330 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.330 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.330 02:57:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.330 [2024-12-05 02:57:03.029363] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:12:32.330 [2024-12-05 02:57:03.029521] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.589 [2024-12-05 02:57:03.205663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.589 [2024-12-05 02:57:03.340557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.589 [2024-12-05 02:57:03.340631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.589 [2024-12-05 02:57:03.340654] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.589 [2024-12-05 02:57:03.340668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.589 [2024-12-05 02:57:03.340684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.590 [2024-12-05 02:57:03.343198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:32.590 [2024-12-05 02:57:03.343408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:32.590 [2024-12-05 02:57:03.343539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:32.590 [2024-12-05 02:57:03.344008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.849 [2024-12-05 02:57:03.542186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.416 [2024-12-05 02:57:04.120631] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.416 Malloc0 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.416 [2024-12-05 02:57:04.240784] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:33.416 { 00:12:33.416 "params": { 00:12:33.416 "name": "Nvme$subsystem", 00:12:33.416 "trtype": "$TEST_TRANSPORT", 00:12:33.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:33.416 "adrfam": "ipv4", 00:12:33.416 "trsvcid": "$NVMF_PORT", 00:12:33.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:33.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:33.416 "hdgst": ${hdgst:-false}, 00:12:33.416 "ddgst": ${ddgst:-false} 00:12:33.416 }, 00:12:33.416 "method": "bdev_nvme_attach_controller" 00:12:33.416 } 00:12:33.416 EOF 00:12:33.416 )") 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:33.416 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:12:33.675 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:33.675 02:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:33.675 "params": { 00:12:33.675 "name": "Nvme1", 00:12:33.675 "trtype": "tcp", 00:12:33.675 "traddr": "10.0.0.3", 00:12:33.675 "adrfam": "ipv4", 00:12:33.675 "trsvcid": "4420", 00:12:33.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.675 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:33.675 "hdgst": false, 00:12:33.675 "ddgst": false 00:12:33.675 }, 00:12:33.675 "method": "bdev_nvme_attach_controller" 00:12:33.675 }' 00:12:33.675 [2024-12-05 02:57:04.364953] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:12:33.675 [2024-12-05 02:57:04.365170] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69475 ] 00:12:33.933 [2024-12-05 02:57:04.555827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:33.934 [2024-12-05 02:57:04.686670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.934 [2024-12-05 02:57:04.686823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.934 [2024-12-05 02:57:04.686850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.191 [2024-12-05 02:57:04.892592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:34.462 I/O targets: 00:12:34.462 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:34.462 00:12:34.462 00:12:34.462 CUnit - A unit testing framework for C - Version 2.1-3 00:12:34.462 http://cunit.sourceforge.net/ 00:12:34.462 00:12:34.462 00:12:34.462 Suite: bdevio tests on: Nvme1n1 00:12:34.462 Test: blockdev write read block ...passed 00:12:34.462 Test: blockdev write zeroes read block ...passed 00:12:34.462 Test: blockdev write zeroes read no split ...passed 00:12:34.462 Test: blockdev write zeroes read split ...passed 00:12:34.462 Test: blockdev write zeroes read split partial ...passed 00:12:34.462 Test: blockdev reset ...[2024-12-05 02:57:05.157765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:34.462 [2024-12-05 02:57:05.157950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:12:34.462 [2024-12-05 02:57:05.178234] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:34.462 passed 00:12:34.462 Test: blockdev write read 8 blocks ...passed 00:12:34.462 Test: blockdev write read size > 128k ...passed 00:12:34.462 Test: blockdev write read invalid size ...passed 00:12:34.462 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:34.462 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:34.462 Test: blockdev write read max offset ...passed 00:12:34.462 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:34.462 Test: blockdev writev readv 8 blocks ...passed 00:12:34.462 Test: blockdev writev readv 30 x 1block ...passed 00:12:34.462 Test: blockdev writev readv block ...passed 00:12:34.462 Test: blockdev writev readv size > 128k ...passed 00:12:34.462 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:34.462 Test: blockdev comparev and writev ...[2024-12-05 02:57:05.190921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.462 [2024-12-05 02:57:05.191002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:34.462 [2024-12-05 02:57:05.191035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.462 [2024-12-05 02:57:05.191055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:34.462 [2024-12-05 02:57:05.191426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.462 [2024-12-05 02:57:05.191466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:34.462 [2024-12-05 02:57:05.191494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.462 [2024-12-05 02:57:05.191529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:34.462 [2024-12-05 02:57:05.192008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.462 [2024-12-05 02:57:05.192047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:34.462 [2024-12-05 02:57:05.192076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.462 [2024-12-05 02:57:05.192099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:34.462 [2024-12-05 02:57:05.192460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.462 [2024-12-05 02:57:05.192505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:34.462 [2024-12-05 02:57:05.192533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.462 [2024-12-05 02:57:05.192553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:34.462 passed 00:12:34.462 Test: blockdev nvme passthru rw ...passed 00:12:34.462 Test: blockdev nvme passthru vendor specific ...[2024-12-05 02:57:05.193721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:34.462 [2024-12-05 02:57:05.193790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:34.462 [2024-12-05 02:57:05.193964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:34.462 [2024-12-05 02:57:05.194016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:34.462 passed 00:12:34.462 Test: blockdev nvme admin passthru ...[2024-12-05 02:57:05.194192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:34.462 [2024-12-05 02:57:05.194242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:34.462 [2024-12-05 02:57:05.194429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:34.462 [2024-12-05 02:57:05.194477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:34.462 passed 00:12:34.462 Test: blockdev copy ...passed 00:12:34.462 00:12:34.462 Run Summary: Type Total Ran Passed Failed Inactive 00:12:34.462 suites 1 1 n/a 0 0 00:12:34.462 tests 23 23 23 0 0 00:12:34.462 asserts 152 152 152 0 n/a 00:12:34.462 00:12:34.462 Elapsed time = 0.297 seconds 00:12:35.395 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.395 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.395 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:35.395 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.395 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:35.395 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:35.395 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:35.395 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:35.653 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:35.653 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:35.653 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:35.653 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:35.653 rmmod nvme_tcp 00:12:35.653 rmmod nvme_fabrics 00:12:35.653 rmmod nvme_keyring 00:12:35.653 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:35.653 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:35.653 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
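Teardown, traced below, is essentially the mirror image of that setup. A condensed sketch under the same assumptions (pid, module, and interface names from this run); the final namespace delete is an assumption, since _remove_spdk_ns is not expanded in this trace:

  modprobe -v -r nvme-tcp        # nvmfcleanup, traced above: also unloads nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                # nvmfpid=69439 in this run
  wait "$nvmfpid"                # works because nvmf_tgt was launched from this shell
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules tagged SPDK_NVMF above
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk   # assumed: what _remove_spdk_ns amounts to here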
00:12:35.653 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 69439 ']' 00:12:35.653 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 69439 00:12:35.653 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 69439 ']' 00:12:35.653 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 69439 00:12:35.653 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:35.653 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.653 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69439 00:12:35.653 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:35.654 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:35.654 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69439' 00:12:35.654 killing process with pid 69439 00:12:35.654 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 69439 00:12:35.654 02:57:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 69439 00:12:37.028 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:37.028 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:12:37.029 00:12:37.029 real 0m5.457s 00:12:37.029 user 0m20.039s 00:12:37.029 sys 0m1.084s 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:37.029 ************************************ 00:12:37.029 END TEST nvmf_bdevio 00:12:37.029 ************************************ 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:37.029 00:12:37.029 real 2m57.551s 00:12:37.029 user 7m55.260s 00:12:37.029 sys 0m53.188s 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:37.029 ************************************ 00:12:37.029 END TEST nvmf_target_core 00:12:37.029 ************************************ 00:12:37.029 02:57:07 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:37.029 02:57:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.029 02:57:07 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.029 02:57:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:37.029 ************************************ 00:12:37.029 START TEST nvmf_target_extra 00:12:37.029 ************************************ 00:12:37.029 02:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:37.289 * Looking for test storage... 
00:12:37.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:37.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.289 --rc genhtml_branch_coverage=1 00:12:37.289 --rc genhtml_function_coverage=1 00:12:37.289 --rc genhtml_legend=1 00:12:37.289 --rc geninfo_all_blocks=1 00:12:37.289 --rc geninfo_unexecuted_blocks=1 00:12:37.289 00:12:37.289 ' 00:12:37.289 02:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:37.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.289 --rc genhtml_branch_coverage=1 00:12:37.290 --rc genhtml_function_coverage=1 00:12:37.290 --rc genhtml_legend=1 00:12:37.290 --rc geninfo_all_blocks=1 00:12:37.290 --rc geninfo_unexecuted_blocks=1 00:12:37.290 00:12:37.290 ' 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:37.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.290 --rc genhtml_branch_coverage=1 00:12:37.290 --rc genhtml_function_coverage=1 00:12:37.290 --rc genhtml_legend=1 00:12:37.290 --rc geninfo_all_blocks=1 00:12:37.290 --rc geninfo_unexecuted_blocks=1 00:12:37.290 00:12:37.290 ' 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:37.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.290 --rc genhtml_branch_coverage=1 00:12:37.290 --rc genhtml_function_coverage=1 00:12:37.290 --rc genhtml_legend=1 00:12:37.290 --rc geninfo_all_blocks=1 00:12:37.290 --rc geninfo_unexecuted_blocks=1 00:12:37.290 00:12:37.290 ' 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.290 02:57:07 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.290 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.290 02:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.290 ************************************ 00:12:37.290 START TEST nvmf_auth_target 00:12:37.290 ************************************ 00:12:37.290 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:37.290 * Looking for test storage... 
00:12:37.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:37.290 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:37.290 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:12:37.290 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:37.551 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:37.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.552 --rc genhtml_branch_coverage=1 00:12:37.552 --rc genhtml_function_coverage=1 00:12:37.552 --rc genhtml_legend=1 00:12:37.552 --rc geninfo_all_blocks=1 00:12:37.552 --rc geninfo_unexecuted_blocks=1 00:12:37.552 00:12:37.552 ' 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:37.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.552 --rc genhtml_branch_coverage=1 00:12:37.552 --rc genhtml_function_coverage=1 00:12:37.552 --rc genhtml_legend=1 00:12:37.552 --rc geninfo_all_blocks=1 00:12:37.552 --rc geninfo_unexecuted_blocks=1 00:12:37.552 00:12:37.552 ' 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:37.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.552 --rc genhtml_branch_coverage=1 00:12:37.552 --rc genhtml_function_coverage=1 00:12:37.552 --rc genhtml_legend=1 00:12:37.552 --rc geninfo_all_blocks=1 00:12:37.552 --rc geninfo_unexecuted_blocks=1 00:12:37.552 00:12:37.552 ' 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:37.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.552 --rc genhtml_branch_coverage=1 00:12:37.552 --rc genhtml_function_coverage=1 00:12:37.552 --rc genhtml_legend=1 00:12:37.552 --rc geninfo_all_blocks=1 00:12:37.552 --rc geninfo_unexecuted_blocks=1 00:12:37.552 00:12:37.552 ' 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.552 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.553 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:37.553 
02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:37.553 Cannot find device "nvmf_init_br" 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:37.553 Cannot find device "nvmf_init_br2" 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:37.553 Cannot find device "nvmf_tgt_br" 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:37.553 Cannot find device "nvmf_tgt_br2" 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:37.553 Cannot find device "nvmf_init_br" 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:37.553 Cannot find device "nvmf_init_br2" 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:37.553 Cannot find device "nvmf_tgt_br" 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:37.553 Cannot find device "nvmf_tgt_br2" 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:37.553 Cannot find device "nvmf_br" 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:12:37.553 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:37.553 Cannot find device "nvmf_init_if" 00:12:37.553 02:57:08 
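The repeated Cannot find device / Cannot open network namespace messages above and below are expected: before building the test network, nvmf_veth_init tears down anything a previous run might have left behind and tolerates misses. A condensed sketch of that tolerant teardown (device list taken from the trace, error handling simplified):

for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br nvmf_init_if nvmf_init_if2; do
    ip link delete "$dev" 2>/dev/null || true     # absent on a clean machine, hence the log noise
done
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true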
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:12:37.554 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:37.554 Cannot find device "nvmf_init_if2" 00:12:37.554 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:12:37.554 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:37.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:37.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:37.813 02:57:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:37.813 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:37.813 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:12:37.813 00:12:37.813 --- 10.0.0.3 ping statistics --- 00:12:37.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.813 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:37.813 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:37.813 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:12:37.813 00:12:37.813 --- 10.0.0.4 ping statistics --- 00:12:37.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.813 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:37.813 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:37.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:37.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:37.813 00:12:37.813 --- 10.0.0.1 ping statistics --- 00:12:37.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.813 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:37.814 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:37.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:12:37.814 00:12:37.814 --- 10.0.0.2 ping statistics --- 00:12:37.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.814 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:12:37.814 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.814 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:12:37.814 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:37.814 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.814 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:37.814 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:37.814 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.814 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:37.814 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:37.814 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:12:37.814 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:37.814 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:37.814 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.814 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=69827 00:12:37.814 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:37.814 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 69827 00:12:37.814 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 69827 ']' 00:12:38.073 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.073 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.073 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
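By this point nvmf_veth_init has finished: a network namespace for the target, veth pairs whose host-side ends are enslaved to the nvmf_br bridge, addresses 10.0.0.1 through 10.0.0.4, iptables ACCEPT rules for port 4420, and the four pings above confirming reachability. A minimal recreation of the first initiator/target leg (the trace also sets up the *_if2 pair, omitted here; run as root):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end + its bridge port
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target end + its bridge port
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge && ip link set nvmf_br up
for port in nvmf_init_br nvmf_tgt_br; do
    ip link set "$port" up && ip link set "$port" master nvmf_br
done
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                             # initiator side -> namespaced target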
00:12:38.073 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.073 02:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.011 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.011 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:39.011 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:39.011 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:39.011 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.011 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.011 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=69859 00:12:39.011 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:39.011 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:39.012 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:12:39.012 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:39.012 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:39.012 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:39.012 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:12:39.012 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:39.012 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:39.012 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fd1f74fc83a83409469d911e5f72e5d33367eaa797b4c585 00:12:39.012 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:12:39.012 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.nvN 00:12:39.012 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fd1f74fc83a83409469d911e5f72e5d33367eaa797b4c585 0 00:12:39.012 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fd1f74fc83a83409469d911e5f72e5d33367eaa797b4c585 0 00:12:39.012 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:39.012 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:39.012 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fd1f74fc83a83409469d911e5f72e5d33367eaa797b4c585 00:12:39.012 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:12:39.012 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:39.272 02:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.nvN 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.nvN 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.nvN 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c0387f18efc3fc205c4797a22ee0a01b0b7448e0f84cd7ecf7f79a8bcfe0d48d 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.yd2 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c0387f18efc3fc205c4797a22ee0a01b0b7448e0f84cd7ecf7f79a8bcfe0d48d 3 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c0387f18efc3fc205c4797a22ee0a01b0b7448e0f84cd7ecf7f79a8bcfe0d48d 3 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c0387f18efc3fc205c4797a22ee0a01b0b7448e0f84cd7ecf7f79a8bcfe0d48d 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.yd2 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.yd2 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.yd2 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:39.272 02:57:09 
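gen_dhchap_key above reads len/2 random bytes with xxd, then pipes the hex key and a digest index (0=null, 1=sha256, 2=sha384, 3=sha512, matching the digest= values in the trace) through an inline python formatter before the chmod 0600. The python body is not shown in the log; the sketch below assumes the standard DH-HMAC-CHAP secret layout (DHHC-1:<digest>: followed by base64 of the key bytes plus their little-endian CRC-32), so treat it as an approximation of format_dhchap_key rather than its exact code:

digest=0                                            # 0=null ... 3=sha512
len=48                                              # hex characters, i.e. 24 random bytes
hexkey=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
keyfile=$(mktemp -t spdk.key-null.XXX)

python3 - "$digest" "$hexkey" > "$keyfile" << 'EOF'
import base64, binascii, struct, sys
digest, key = int(sys.argv[1]), bytes.fromhex(sys.argv[2])
blob = base64.b64encode(key + struct.pack("<I", binascii.crc32(key)))
print(f"DHHC-1:{digest:02}:{blob.decode()}:")
EOF
chmod 0600 "$keyfile" && echo "$keyfile"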
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0ae54c3594b60cc51ef83e677d134272 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.l6h 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0ae54c3594b60cc51ef83e677d134272 1 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0ae54c3594b60cc51ef83e677d134272 1 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0ae54c3594b60cc51ef83e677d134272 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.l6h 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.l6h 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.l6h 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=016868fba4925f4c5d8d573f221de2254dacf2af58b2912c 00:12:39.272 02:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.QFo 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 016868fba4925f4c5d8d573f221de2254dacf2af58b2912c 2 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 016868fba4925f4c5d8d573f221de2254dacf2af58b2912c 2 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=016868fba4925f4c5d8d573f221de2254dacf2af58b2912c 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.QFo 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.QFo 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.QFo 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4f185f80d31cc836eba7db87d86705fa11a207d497dc475b 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4ij 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4f185f80d31cc836eba7db87d86705fa11a207d497dc475b 2 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4f185f80d31cc836eba7db87d86705fa11a207d497dc475b 2 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4f185f80d31cc836eba7db87d86705fa11a207d497dc475b 00:12:39.272 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:39.273 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4ij 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4ij 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.4ij 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:39.532 02:57:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=46ed60c1636a7b37498209a9bac0b396 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.1vm 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 46ed60c1636a7b37498209a9bac0b396 1 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 46ed60c1636a7b37498209a9bac0b396 1 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=46ed60c1636a7b37498209a9bac0b396 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.1vm 00:12:39.532 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.1vm 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.1vm 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bf2408670a740f1e28f5553b8ca0e6a5170a80bd906dbc01c0a4908b931ad77a 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.EUe 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
bf2408670a740f1e28f5553b8ca0e6a5170a80bd906dbc01c0a4908b931ad77a 3 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bf2408670a740f1e28f5553b8ca0e6a5170a80bd906dbc01c0a4908b931ad77a 3 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bf2408670a740f1e28f5553b8ca0e6a5170a80bd906dbc01c0a4908b931ad77a 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.EUe 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.EUe 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.EUe 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 69827 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 69827 ']' 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.533 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.792 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.792 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:39.792 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 69859 /var/tmp/host.sock 00:12:39.792 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 69859 ']' 00:12:39.792 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:39.792 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:39.792 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
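waitforlisten, traced here for both the target pid (69827 on the default /var/tmp/spdk.sock) and the host-side pid (69859 on /var/tmp/host.sock), simply blocks until the named process is serving its RPC socket. A hedged sketch of such a loop; the real helper lives in autotest_common.sh and its exact probe may differ (using rpc_get_methods as the probe is an assumption):

waitforlisten_sketch() {                            # usage: waitforlisten_sketch <pid> [rpc_addr]
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1      # daemon died before listening
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
            &>/dev/null && return 0                  # RPC socket is answering
        sleep 0.5
    done
    return 1
}

waitforlisten_sketch 69859 /var/tmp/host.sock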
00:12:39.792 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.792 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.359 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.359 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:40.360 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:12:40.360 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.360 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.360 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.360 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:40.360 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nvN 00:12:40.360 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.360 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.360 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.360 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.nvN 00:12:40.360 02:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.nvN 00:12:40.620 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.yd2 ]] 00:12:40.620 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yd2 00:12:40.620 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.620 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.620 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.620 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yd2 00:12:40.620 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yd2 00:12:40.879 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:40.879 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.l6h 00:12:40.879 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.879 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.879 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.879 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.l6h 00:12:40.879 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.l6h 00:12:41.139 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.QFo ]] 00:12:41.139 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.QFo 00:12:41.139 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.139 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.139 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.139 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.QFo 00:12:41.139 02:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.QFo 00:12:41.398 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:41.398 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4ij 00:12:41.398 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.398 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.398 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.398 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.4ij 00:12:41.398 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.4ij 00:12:41.657 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.1vm ]] 00:12:41.657 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1vm 00:12:41.657 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.657 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.657 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.657 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1vm 00:12:41.657 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1vm 00:12:41.917 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:41.917 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.EUe 00:12:41.917 02:57:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.917 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.917 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.917 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.EUe 00:12:41.917 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.EUe 00:12:42.177 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:12:42.177 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:42.177 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:42.177 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.177 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:42.177 02:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:42.437 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:12:42.437 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.437 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:42.437 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:42.437 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:42.437 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.437 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.437 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.437 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.437 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.437 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.437 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.437 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.696 00:12:42.696 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.696 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.696 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.955 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.955 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.955 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.955 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.955 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.955 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.955 { 00:12:42.955 "cntlid": 1, 00:12:42.955 "qid": 0, 00:12:42.955 "state": "enabled", 00:12:42.955 "thread": "nvmf_tgt_poll_group_000", 00:12:42.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:12:42.955 "listen_address": { 00:12:42.955 "trtype": "TCP", 00:12:42.955 "adrfam": "IPv4", 00:12:42.955 "traddr": "10.0.0.3", 00:12:42.955 "trsvcid": "4420" 00:12:42.955 }, 00:12:42.955 "peer_address": { 00:12:42.955 "trtype": "TCP", 00:12:42.955 "adrfam": "IPv4", 00:12:42.955 "traddr": "10.0.0.1", 00:12:42.955 "trsvcid": "37030" 00:12:42.955 }, 00:12:42.955 "auth": { 00:12:42.955 "state": "completed", 00:12:42.955 "digest": "sha256", 00:12:42.955 "dhgroup": "null" 00:12:42.955 } 00:12:42.955 } 00:12:42.955 ]' 00:12:42.955 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:43.212 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:43.212 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.212 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:43.212 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.212 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.212 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.212 02:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.469 02:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:12:43.469 02:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.721 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.721 02:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.979 00:12:47.979 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.979 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.979 02:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.547 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.547 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.547 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.547 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.547 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.547 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.547 { 00:12:48.547 "cntlid": 3, 00:12:48.547 "qid": 0, 00:12:48.547 "state": "enabled", 00:12:48.547 "thread": "nvmf_tgt_poll_group_000", 00:12:48.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:12:48.547 "listen_address": { 00:12:48.547 "trtype": "TCP", 00:12:48.547 "adrfam": "IPv4", 00:12:48.547 "traddr": "10.0.0.3", 00:12:48.547 "trsvcid": "4420" 00:12:48.547 }, 00:12:48.547 "peer_address": { 00:12:48.547 "trtype": "TCP", 00:12:48.547 "adrfam": "IPv4", 00:12:48.547 "traddr": "10.0.0.1", 00:12:48.547 "trsvcid": "37072" 00:12:48.547 }, 00:12:48.547 "auth": { 00:12:48.547 "state": "completed", 00:12:48.547 "digest": "sha256", 00:12:48.547 "dhgroup": "null" 00:12:48.547 } 00:12:48.547 } 00:12:48.547 ]' 00:12:48.547 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.548 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:48.548 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.548 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:48.548 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.548 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.548 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.548 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.807 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret 
DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:12:48.807 02:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:12:49.375 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.375 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:12:49.375 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.375 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.375 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.375 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:49.375 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:49.375 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:49.634 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:12:49.634 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:49.634 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:49.634 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:49.634 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:49.634 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.634 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.634 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.634 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.634 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.634 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.634 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.634 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.893 00:12:49.893 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.893 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.893 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.152 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.152 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.152 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.152 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.152 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.152 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:50.152 { 00:12:50.152 "cntlid": 5, 00:12:50.152 "qid": 0, 00:12:50.152 "state": "enabled", 00:12:50.152 "thread": "nvmf_tgt_poll_group_000", 00:12:50.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:12:50.152 "listen_address": { 00:12:50.152 "trtype": "TCP", 00:12:50.152 "adrfam": "IPv4", 00:12:50.152 "traddr": "10.0.0.3", 00:12:50.152 "trsvcid": "4420" 00:12:50.152 }, 00:12:50.152 "peer_address": { 00:12:50.152 "trtype": "TCP", 00:12:50.152 "adrfam": "IPv4", 00:12:50.152 "traddr": "10.0.0.1", 00:12:50.152 "trsvcid": "37098" 00:12:50.152 }, 00:12:50.152 "auth": { 00:12:50.152 "state": "completed", 00:12:50.152 "digest": "sha256", 00:12:50.152 "dhgroup": "null" 00:12:50.152 } 00:12:50.152 } 00:12:50.152 ]' 00:12:50.152 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:50.411 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:50.411 02:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:50.411 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:50.411 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:50.411 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.411 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.411 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:12:50.670 02:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:12:51.607 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:51.608 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:51.867 00:12:51.867 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:51.867 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.867 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.434 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.434 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.434 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.434 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.434 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.434 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:52.434 { 00:12:52.434 "cntlid": 7, 00:12:52.434 "qid": 0, 00:12:52.434 "state": "enabled", 00:12:52.434 "thread": "nvmf_tgt_poll_group_000", 00:12:52.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:12:52.434 "listen_address": { 00:12:52.434 "trtype": "TCP", 00:12:52.434 "adrfam": "IPv4", 00:12:52.434 "traddr": "10.0.0.3", 00:12:52.434 "trsvcid": "4420" 00:12:52.434 }, 00:12:52.434 "peer_address": { 00:12:52.434 "trtype": "TCP", 00:12:52.434 "adrfam": "IPv4", 00:12:52.434 "traddr": "10.0.0.1", 00:12:52.434 "trsvcid": "33012" 00:12:52.434 }, 00:12:52.434 "auth": { 00:12:52.434 "state": "completed", 00:12:52.434 "digest": "sha256", 00:12:52.434 "dhgroup": "null" 00:12:52.434 } 00:12:52.434 } 00:12:52.434 ]' 00:12:52.434 02:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:52.434 02:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:52.434 02:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:52.434 02:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:52.434 02:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:52.434 02:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.434 02:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.434 02:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.693 02:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:12:52.693 02:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:12:53.260 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.260 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:12:53.260 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.260 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.260 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.260 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:53.260 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:53.260 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:53.260 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:53.519 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:12:53.519 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.519 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:53.519 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:53.519 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:53.519 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.519 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.519 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.519 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.519 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.519 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.519 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.519 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.086 00:12:54.086 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.086 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.086 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.345 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.345 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.345 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.345 02:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.345 02:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.345 02:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:54.345 { 00:12:54.345 "cntlid": 9, 00:12:54.345 "qid": 0, 00:12:54.345 "state": "enabled", 00:12:54.345 "thread": "nvmf_tgt_poll_group_000", 00:12:54.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:12:54.345 "listen_address": { 00:12:54.345 "trtype": "TCP", 00:12:54.346 "adrfam": "IPv4", 00:12:54.346 "traddr": "10.0.0.3", 00:12:54.346 "trsvcid": "4420" 00:12:54.346 }, 00:12:54.346 "peer_address": { 00:12:54.346 "trtype": "TCP", 00:12:54.346 "adrfam": "IPv4", 00:12:54.346 "traddr": "10.0.0.1", 00:12:54.346 "trsvcid": "33032" 00:12:54.346 }, 00:12:54.346 "auth": { 00:12:54.346 "state": "completed", 00:12:54.346 "digest": "sha256", 00:12:54.346 "dhgroup": "ffdhe2048" 00:12:54.346 } 00:12:54.346 } 00:12:54.346 ]' 00:12:54.346 02:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:54.346 02:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:54.346 02:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:54.346 02:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:54.346 02:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:54.346 02:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.346 02:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.346 02:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.604 
02:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:12:54.604 02:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:12:55.537 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.537 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:12:55.537 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.537 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.537 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.537 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:55.537 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:55.537 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:55.537 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:12:55.537 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:55.537 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:55.537 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:55.537 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:55.537 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.537 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.537 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.537 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.537 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.537 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.538 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.538 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.105 00:12:56.105 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.105 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.105 02:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.365 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.365 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.365 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.365 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.365 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.365 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:56.365 { 00:12:56.365 "cntlid": 11, 00:12:56.365 "qid": 0, 00:12:56.365 "state": "enabled", 00:12:56.365 "thread": "nvmf_tgt_poll_group_000", 00:12:56.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:12:56.365 "listen_address": { 00:12:56.365 "trtype": "TCP", 00:12:56.365 "adrfam": "IPv4", 00:12:56.365 "traddr": "10.0.0.3", 00:12:56.365 "trsvcid": "4420" 00:12:56.365 }, 00:12:56.365 "peer_address": { 00:12:56.365 "trtype": "TCP", 00:12:56.365 "adrfam": "IPv4", 00:12:56.365 "traddr": "10.0.0.1", 00:12:56.365 "trsvcid": "33074" 00:12:56.365 }, 00:12:56.365 "auth": { 00:12:56.365 "state": "completed", 00:12:56.365 "digest": "sha256", 00:12:56.365 "dhgroup": "ffdhe2048" 00:12:56.365 } 00:12:56.365 } 00:12:56.365 ]' 00:12:56.365 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:56.365 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:56.365 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.365 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:56.365 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.365 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.365 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.365 
02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.932 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:12:56.933 02:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:12:57.500 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.500 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:12:57.500 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.500 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.500 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.500 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.500 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:57.500 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:57.759 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:12:57.759 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:57.759 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:57.759 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:57.759 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:57.759 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.759 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.759 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.759 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.759 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:57.759 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.759 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.759 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.018 00:12:58.018 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.018 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.018 02:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.277 02:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.277 02:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.277 02:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.277 02:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.277 02:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.277 02:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.277 { 00:12:58.277 "cntlid": 13, 00:12:58.277 "qid": 0, 00:12:58.277 "state": "enabled", 00:12:58.277 "thread": "nvmf_tgt_poll_group_000", 00:12:58.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:12:58.277 "listen_address": { 00:12:58.277 "trtype": "TCP", 00:12:58.277 "adrfam": "IPv4", 00:12:58.277 "traddr": "10.0.0.3", 00:12:58.277 "trsvcid": "4420" 00:12:58.277 }, 00:12:58.277 "peer_address": { 00:12:58.277 "trtype": "TCP", 00:12:58.277 "adrfam": "IPv4", 00:12:58.277 "traddr": "10.0.0.1", 00:12:58.277 "trsvcid": "33106" 00:12:58.277 }, 00:12:58.277 "auth": { 00:12:58.277 "state": "completed", 00:12:58.277 "digest": "sha256", 00:12:58.277 "dhgroup": "ffdhe2048" 00:12:58.277 } 00:12:58.277 } 00:12:58.277 ]' 00:12:58.277 02:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.536 02:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:58.536 02:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:58.536 02:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:58.536 02:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:58.536 02:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.536 02:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.536 02:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.794 02:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:12:58.794 02:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
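Each connect_authenticate pass in this trace follows the same shape. Below is a minimal sketch of one pass (sha256 digest, ffdhe2048 DH group, key0/ckey0) with the commands lifted from the surrounding xtrace output; the $TGT_RPC/$HOST_RPC and NQN shell variables are shorthand introduced for this sketch only:

  TGT_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  HOST_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53

  # Restrict the host to one digest/DH-group combination for this pass.
  $HOST_RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # Allow the host on the subsystem with the key pair under test.
  $TGT_RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Attach a controller; DH-HMAC-CHAP authentication runs during the connect.
  $HOST_RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # The qpair's auth block should report state "completed" with the expected
  # digest and dhgroup; the controller is then detached before the next pass.
  $TGT_RPC nvmf_subsystem_get_qpairs "$SUBNQN"
  $HOST_RPC bdev_nvme_detach_controller nvme0

The trace continues below with the same sequence for key3 under ffdhe2048 and then repeats the whole loop for the ffdhe3072 DH group.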
00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:59.751 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:00.318 00:13:00.318 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.318 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.318 02:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.318 02:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.318 02:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.318 02:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.318 02:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.318 02:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.318 02:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.318 { 00:13:00.318 "cntlid": 15, 00:13:00.318 "qid": 0, 00:13:00.318 "state": "enabled", 00:13:00.318 "thread": "nvmf_tgt_poll_group_000", 00:13:00.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:00.318 "listen_address": { 00:13:00.318 "trtype": "TCP", 00:13:00.318 "adrfam": "IPv4", 00:13:00.318 "traddr": "10.0.0.3", 00:13:00.318 "trsvcid": "4420" 00:13:00.318 }, 00:13:00.318 "peer_address": { 00:13:00.318 "trtype": "TCP", 00:13:00.318 "adrfam": "IPv4", 00:13:00.318 "traddr": "10.0.0.1", 00:13:00.318 "trsvcid": "33140" 00:13:00.318 }, 00:13:00.318 "auth": { 00:13:00.318 "state": "completed", 00:13:00.318 "digest": "sha256", 00:13:00.318 "dhgroup": "ffdhe2048" 00:13:00.318 } 00:13:00.318 } 00:13:00.318 ]' 00:13:00.318 02:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.576 02:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:00.576 02:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.576 02:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:00.576 02:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.576 02:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.576 
02:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.576 02:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.835 02:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:13:00.835 02:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:13:01.402 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.402 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:01.402 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.402 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.402 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.402 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:01.402 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.402 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:01.402 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:01.660 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:13:01.660 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:01.660 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:01.660 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:01.660 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:01.660 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.660 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.660 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.660 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:01.660 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.660 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.660 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.660 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.225 00:13:02.225 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.225 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.225 02:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.483 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.483 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.483 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.483 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.483 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.483 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.483 { 00:13:02.483 "cntlid": 17, 00:13:02.483 "qid": 0, 00:13:02.483 "state": "enabled", 00:13:02.483 "thread": "nvmf_tgt_poll_group_000", 00:13:02.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:02.483 "listen_address": { 00:13:02.483 "trtype": "TCP", 00:13:02.483 "adrfam": "IPv4", 00:13:02.483 "traddr": "10.0.0.3", 00:13:02.483 "trsvcid": "4420" 00:13:02.483 }, 00:13:02.483 "peer_address": { 00:13:02.483 "trtype": "TCP", 00:13:02.483 "adrfam": "IPv4", 00:13:02.483 "traddr": "10.0.0.1", 00:13:02.483 "trsvcid": "43018" 00:13:02.483 }, 00:13:02.483 "auth": { 00:13:02.483 "state": "completed", 00:13:02.483 "digest": "sha256", 00:13:02.483 "dhgroup": "ffdhe3072" 00:13:02.483 } 00:13:02.483 } 00:13:02.483 ]' 00:13:02.483 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.483 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:02.483 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.483 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:02.483 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.483 02:57:33 
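The key0/ckey0 pairing visible above is what makes this iteration bidirectional DH-HMAC-CHAP: --dhchap-key names the secret the host authenticates with, while --dhchap-ctrlr-key names the secret the controller must use to answer the host's counter-challenge (both are keyring entries created earlier in the run). Stripped of the xtrace noise, the two RPCs are (a sketch; rpc.py as above):

# Target: authorize the host NQN on the subsystem and bind both secrets to it.
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Host: attach with the matching pair; omitting the ctrlr key (as the key3 cases below do)
# falls back to one-way authentication of the host only.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0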
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.483 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.483 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.741 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:13:02.741 02:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:13:03.677 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.677 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:03.677 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.677 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.677 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.677 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.677 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:03.677 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:03.677 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:13:03.677 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.677 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:03.677 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:03.677 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:03.677 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.677 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:13:03.678 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.678 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.936 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.936 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.936 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.936 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.195 00:13:04.195 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.195 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.195 02:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.463 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.463 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.463 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.463 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.463 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.463 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.463 { 00:13:04.463 "cntlid": 19, 00:13:04.463 "qid": 0, 00:13:04.463 "state": "enabled", 00:13:04.463 "thread": "nvmf_tgt_poll_group_000", 00:13:04.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:04.463 "listen_address": { 00:13:04.463 "trtype": "TCP", 00:13:04.463 "adrfam": "IPv4", 00:13:04.463 "traddr": "10.0.0.3", 00:13:04.463 "trsvcid": "4420" 00:13:04.463 }, 00:13:04.464 "peer_address": { 00:13:04.464 "trtype": "TCP", 00:13:04.464 "adrfam": "IPv4", 00:13:04.464 "traddr": "10.0.0.1", 00:13:04.464 "trsvcid": "43060" 00:13:04.464 }, 00:13:04.464 "auth": { 00:13:04.464 "state": "completed", 00:13:04.464 "digest": "sha256", 00:13:04.464 "dhgroup": "ffdhe3072" 00:13:04.464 } 00:13:04.464 } 00:13:04.464 ]' 00:13:04.464 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.464 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:04.464 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.725 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:04.725 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.725 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.725 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.725 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.983 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:13:04.983 02:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:13:05.551 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.551 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:05.551 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.551 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.551 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.551 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.551 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:05.551 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:05.811 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:13:05.811 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:05.811 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:05.811 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:05.811 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:05.811 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.811 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.811 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.811 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.811 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.811 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.811 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.811 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.071 00:13:06.330 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.330 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.330 02:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.590 02:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.590 02:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.590 02:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.590 02:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.590 02:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.590 02:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.590 { 00:13:06.590 "cntlid": 21, 00:13:06.590 "qid": 0, 00:13:06.590 "state": "enabled", 00:13:06.590 "thread": "nvmf_tgt_poll_group_000", 00:13:06.590 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:06.590 "listen_address": { 00:13:06.590 "trtype": "TCP", 00:13:06.590 "adrfam": "IPv4", 00:13:06.590 "traddr": "10.0.0.3", 00:13:06.590 "trsvcid": "4420" 00:13:06.590 }, 00:13:06.590 "peer_address": { 00:13:06.590 "trtype": "TCP", 00:13:06.590 "adrfam": "IPv4", 00:13:06.590 "traddr": "10.0.0.1", 00:13:06.590 "trsvcid": "43088" 00:13:06.590 }, 00:13:06.590 "auth": { 00:13:06.590 "state": "completed", 00:13:06.590 "digest": "sha256", 00:13:06.590 "dhgroup": "ffdhe3072" 00:13:06.590 } 00:13:06.590 } 00:13:06.590 ]' 00:13:06.590 02:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.590 02:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:06.590 02:57:37 
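After each RPC-level attach/detach, the same credentials are also exercised through nvme-cli, which takes the secrets inline in the DHHC-1 text representation rather than as keyring names. The general shape, with the secrets replaced by placeholders (the leading DHHC-1:<hh>: field is, per the usual NVMe-oF convention and not something this log states, 00 for an untransformed secret and 01/02/03 for SHA-256/384/512-transformed ones):

nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 \
    --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 \
    --dhchap-secret 'DHHC-1:01:<base64 host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<base64 controller secret>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0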
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.590 02:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:06.590 02:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.590 02:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.590 02:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.590 02:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.850 02:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:13:06.850 02:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:13:07.788 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.788 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:07.788 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.788 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.788 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.788 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.788 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:07.788 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:08.048 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:13:08.048 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.048 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:08.048 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:08.048 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:08.048 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.048 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:13:08.048 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.048 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.048 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.048 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:08.048 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:08.048 02:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:08.308 00:13:08.308 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.308 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.308 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.567 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.567 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.567 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.567 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.827 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.827 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.827 { 00:13:08.827 "cntlid": 23, 00:13:08.827 "qid": 0, 00:13:08.827 "state": "enabled", 00:13:08.827 "thread": "nvmf_tgt_poll_group_000", 00:13:08.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:08.827 "listen_address": { 00:13:08.827 "trtype": "TCP", 00:13:08.827 "adrfam": "IPv4", 00:13:08.827 "traddr": "10.0.0.3", 00:13:08.827 "trsvcid": "4420" 00:13:08.827 }, 00:13:08.827 "peer_address": { 00:13:08.827 "trtype": "TCP", 00:13:08.827 "adrfam": "IPv4", 00:13:08.827 "traddr": "10.0.0.1", 00:13:08.827 "trsvcid": "43122" 00:13:08.827 }, 00:13:08.827 "auth": { 00:13:08.827 "state": "completed", 00:13:08.827 "digest": "sha256", 00:13:08.827 "dhgroup": "ffdhe3072" 00:13:08.827 } 00:13:08.827 } 00:13:08.827 ]' 00:13:08.827 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.827 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:13:08.827 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.828 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:08.828 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.828 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.828 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.828 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.087 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:13:09.087 02:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:13:09.655 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.914 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:09.914 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.914 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.914 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.914 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:09.914 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:09.914 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:09.914 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:10.173 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:13:10.173 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.173 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:10.173 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:10.173 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:10.173 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.173 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.173 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.173 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.173 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.173 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.173 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.173 02:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.433 00:13:10.433 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.433 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.433 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.692 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.692 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.692 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.692 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.692 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.692 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:10.692 { 00:13:10.692 "cntlid": 25, 00:13:10.692 "qid": 0, 00:13:10.692 "state": "enabled", 00:13:10.692 "thread": "nvmf_tgt_poll_group_000", 00:13:10.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:10.692 "listen_address": { 00:13:10.692 "trtype": "TCP", 00:13:10.692 "adrfam": "IPv4", 00:13:10.692 "traddr": "10.0.0.3", 00:13:10.692 "trsvcid": "4420" 00:13:10.692 }, 00:13:10.692 "peer_address": { 00:13:10.692 "trtype": "TCP", 00:13:10.692 "adrfam": "IPv4", 00:13:10.692 "traddr": "10.0.0.1", 00:13:10.692 "trsvcid": "50946" 00:13:10.692 }, 00:13:10.692 "auth": { 00:13:10.692 "state": "completed", 00:13:10.692 "digest": "sha256", 00:13:10.692 "dhgroup": "ffdhe4096" 00:13:10.692 } 00:13:10.692 } 00:13:10.692 ]' 00:13:10.692 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:13:10.692 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:10.692 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:10.951 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:10.951 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:10.951 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.951 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.951 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.211 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:13:11.211 02:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:13:11.779 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.779 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:11.779 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.779 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.779 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.779 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:11.779 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:11.779 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:12.038 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:13:12.297 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.297 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:12.297 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:12.297 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:12.297 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.297 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.297 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.297 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.297 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.297 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.297 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.297 02:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.557 00:13:12.557 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.557 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.557 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:12.816 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.816 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.816 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.816 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.816 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.816 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.816 { 00:13:12.816 "cntlid": 27, 00:13:12.816 "qid": 0, 00:13:12.816 "state": "enabled", 00:13:12.817 "thread": "nvmf_tgt_poll_group_000", 00:13:12.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:12.817 "listen_address": { 00:13:12.817 "trtype": "TCP", 00:13:12.817 "adrfam": "IPv4", 00:13:12.817 "traddr": "10.0.0.3", 00:13:12.817 "trsvcid": "4420" 00:13:12.817 }, 00:13:12.817 "peer_address": { 00:13:12.817 "trtype": "TCP", 00:13:12.817 "adrfam": "IPv4", 00:13:12.817 "traddr": "10.0.0.1", 00:13:12.817 "trsvcid": "50978" 00:13:12.817 }, 00:13:12.817 "auth": { 00:13:12.817 "state": "completed", 
00:13:12.817 "digest": "sha256", 00:13:12.817 "dhgroup": "ffdhe4096" 00:13:12.817 } 00:13:12.817 } 00:13:12.817 ]' 00:13:12.817 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.104 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:13.104 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.104 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:13.104 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.104 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.104 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.104 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.425 02:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:13:13.425 02:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:13:13.993 02:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.993 02:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:13.993 02:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.993 02:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.993 02:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.993 02:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:13.993 02:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:13.993 02:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:14.253 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:13:14.253 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:14.253 02:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:14.253 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:14.253 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:14.253 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.253 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.253 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.253 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.253 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.253 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.253 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.253 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.819 00:13:14.819 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:14.819 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.819 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.077 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.077 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.077 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.077 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.077 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.077 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:15.077 { 00:13:15.077 "cntlid": 29, 00:13:15.077 "qid": 0, 00:13:15.077 "state": "enabled", 00:13:15.077 "thread": "nvmf_tgt_poll_group_000", 00:13:15.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:15.077 "listen_address": { 00:13:15.077 "trtype": "TCP", 00:13:15.077 "adrfam": "IPv4", 00:13:15.077 "traddr": "10.0.0.3", 00:13:15.077 "trsvcid": "4420" 00:13:15.077 }, 00:13:15.077 "peer_address": { 00:13:15.077 "trtype": "TCP", 00:13:15.077 "adrfam": 
"IPv4", 00:13:15.077 "traddr": "10.0.0.1", 00:13:15.077 "trsvcid": "51008" 00:13:15.077 }, 00:13:15.077 "auth": { 00:13:15.077 "state": "completed", 00:13:15.077 "digest": "sha256", 00:13:15.077 "dhgroup": "ffdhe4096" 00:13:15.077 } 00:13:15.077 } 00:13:15.077 ]' 00:13:15.077 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:15.077 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:15.077 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.336 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:15.336 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.336 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.336 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.336 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.594 02:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:13:15.594 02:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:13:16.530 02:57:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:16.530 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:17.097 00:13:17.097 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.097 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.097 02:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.355 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.355 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.355 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.355 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.355 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.355 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.355 { 00:13:17.355 "cntlid": 31, 00:13:17.355 "qid": 0, 00:13:17.355 "state": "enabled", 00:13:17.355 "thread": "nvmf_tgt_poll_group_000", 00:13:17.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:17.355 "listen_address": { 00:13:17.355 "trtype": "TCP", 00:13:17.355 "adrfam": "IPv4", 00:13:17.355 "traddr": "10.0.0.3", 00:13:17.355 "trsvcid": "4420" 00:13:17.355 }, 00:13:17.355 "peer_address": { 00:13:17.355 "trtype": "TCP", 
00:13:17.355 "adrfam": "IPv4", 00:13:17.355 "traddr": "10.0.0.1", 00:13:17.355 "trsvcid": "51022" 00:13:17.355 }, 00:13:17.355 "auth": { 00:13:17.355 "state": "completed", 00:13:17.355 "digest": "sha256", 00:13:17.355 "dhgroup": "ffdhe4096" 00:13:17.355 } 00:13:17.355 } 00:13:17.355 ]' 00:13:17.355 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.355 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:17.355 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.613 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:17.613 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.613 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.613 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.613 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.871 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:13:17.871 02:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:13:18.439 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.439 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:18.439 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.439 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.439 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.439 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:18.439 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.439 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:18.439 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:18.698 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:13:18.698 
02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:18.698 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:18.698 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:18.698 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:18.698 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.698 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.698 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.698 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.698 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.698 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.698 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.698 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.264 00:13:19.264 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.264 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.264 02:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.522 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.522 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.522 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.522 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.522 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.522 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.522 { 00:13:19.522 "cntlid": 33, 00:13:19.522 "qid": 0, 00:13:19.522 "state": "enabled", 00:13:19.522 "thread": "nvmf_tgt_poll_group_000", 00:13:19.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:19.522 "listen_address": { 00:13:19.522 "trtype": "TCP", 00:13:19.522 "adrfam": "IPv4", 00:13:19.522 "traddr": 
"10.0.0.3", 00:13:19.522 "trsvcid": "4420" 00:13:19.522 }, 00:13:19.522 "peer_address": { 00:13:19.522 "trtype": "TCP", 00:13:19.522 "adrfam": "IPv4", 00:13:19.522 "traddr": "10.0.0.1", 00:13:19.522 "trsvcid": "51048" 00:13:19.522 }, 00:13:19.522 "auth": { 00:13:19.522 "state": "completed", 00:13:19.522 "digest": "sha256", 00:13:19.522 "dhgroup": "ffdhe6144" 00:13:19.522 } 00:13:19.522 } 00:13:19.522 ]' 00:13:19.522 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.522 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:19.522 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.782 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:19.782 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.782 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.782 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.782 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.042 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:13:20.042 02:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:13:20.610 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.610 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:20.610 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.610 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.610 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.610 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.610 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:20.610 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:20.897 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:13:20.897 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.897 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:20.897 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:20.897 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:20.897 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.897 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.897 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.897 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.897 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.897 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.897 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.897 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.466 00:13:21.466 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:21.466 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.466 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.726 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.726 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.726 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.726 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.726 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.726 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.726 { 00:13:21.726 "cntlid": 35, 00:13:21.726 "qid": 0, 00:13:21.726 "state": "enabled", 00:13:21.726 "thread": "nvmf_tgt_poll_group_000", 
00:13:21.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:21.726 "listen_address": { 00:13:21.726 "trtype": "TCP", 00:13:21.726 "adrfam": "IPv4", 00:13:21.726 "traddr": "10.0.0.3", 00:13:21.726 "trsvcid": "4420" 00:13:21.726 }, 00:13:21.726 "peer_address": { 00:13:21.726 "trtype": "TCP", 00:13:21.726 "adrfam": "IPv4", 00:13:21.726 "traddr": "10.0.0.1", 00:13:21.726 "trsvcid": "43350" 00:13:21.726 }, 00:13:21.726 "auth": { 00:13:21.726 "state": "completed", 00:13:21.726 "digest": "sha256", 00:13:21.726 "dhgroup": "ffdhe6144" 00:13:21.726 } 00:13:21.726 } 00:13:21.726 ]' 00:13:21.726 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.726 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:21.726 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.726 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:21.726 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.726 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.726 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.726 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.985 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:13:21.985 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:13:22.552 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.810 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:22.810 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.810 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.810 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.810 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.811 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:22.811 02:57:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:23.070 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:13:23.070 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:23.070 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:23.070 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:23.070 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:23.070 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.070 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.070 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.070 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.070 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.070 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.070 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.070 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.638 00:13:23.638 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.638 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.638 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.897 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.897 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.897 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.897 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.897 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.897 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.897 { 
00:13:23.897 "cntlid": 37, 00:13:23.897 "qid": 0, 00:13:23.897 "state": "enabled", 00:13:23.897 "thread": "nvmf_tgt_poll_group_000", 00:13:23.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:23.897 "listen_address": { 00:13:23.897 "trtype": "TCP", 00:13:23.897 "adrfam": "IPv4", 00:13:23.897 "traddr": "10.0.0.3", 00:13:23.897 "trsvcid": "4420" 00:13:23.897 }, 00:13:23.897 "peer_address": { 00:13:23.897 "trtype": "TCP", 00:13:23.897 "adrfam": "IPv4", 00:13:23.897 "traddr": "10.0.0.1", 00:13:23.897 "trsvcid": "43384" 00:13:23.897 }, 00:13:23.897 "auth": { 00:13:23.897 "state": "completed", 00:13:23.897 "digest": "sha256", 00:13:23.897 "dhgroup": "ffdhe6144" 00:13:23.897 } 00:13:23.897 } 00:13:23.897 ]' 00:13:23.897 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.898 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:23.898 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.898 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:23.898 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.898 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.898 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.898 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.156 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:13:24.156 02:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:13:25.091 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.091 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:25.091 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.091 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.091 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.091 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:25.091 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:25.091 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:25.349 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:13:25.349 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:25.349 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:25.349 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:25.349 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:25.349 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.349 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:13:25.349 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.349 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.349 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.349 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:25.349 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:25.349 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:25.916 00:13:25.916 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.916 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.916 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.916 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.916 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.916 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.917 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.917 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.917 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:13:25.917 { 00:13:25.917 "cntlid": 39, 00:13:25.917 "qid": 0, 00:13:25.917 "state": "enabled", 00:13:25.917 "thread": "nvmf_tgt_poll_group_000", 00:13:25.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:25.917 "listen_address": { 00:13:25.917 "trtype": "TCP", 00:13:25.917 "adrfam": "IPv4", 00:13:25.917 "traddr": "10.0.0.3", 00:13:25.917 "trsvcid": "4420" 00:13:25.917 }, 00:13:25.917 "peer_address": { 00:13:25.917 "trtype": "TCP", 00:13:25.917 "adrfam": "IPv4", 00:13:25.917 "traddr": "10.0.0.1", 00:13:25.917 "trsvcid": "43414" 00:13:25.917 }, 00:13:25.917 "auth": { 00:13:25.917 "state": "completed", 00:13:25.917 "digest": "sha256", 00:13:25.917 "dhgroup": "ffdhe6144" 00:13:25.917 } 00:13:25.917 } 00:13:25.917 ]' 00:13:25.917 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.176 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:26.176 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.176 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:26.176 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.176 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.176 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.176 02:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.434 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:13:26.434 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:13:27.020 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.020 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:27.020 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.020 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.020 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.020 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:27.020 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.020 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:27.020 02:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:27.280 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:13:27.280 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:27.280 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:27.280 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:27.280 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:27.280 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.280 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.280 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.280 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.539 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.539 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.540 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.540 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.108 00:13:28.108 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:28.108 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.108 02:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:28.368 02:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.368 02:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.368 02:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.368 02:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.368 02:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:28.368 02:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:28.368 { 00:13:28.368 "cntlid": 41, 00:13:28.368 "qid": 0, 00:13:28.368 "state": "enabled", 00:13:28.368 "thread": "nvmf_tgt_poll_group_000", 00:13:28.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:28.368 "listen_address": { 00:13:28.368 "trtype": "TCP", 00:13:28.368 "adrfam": "IPv4", 00:13:28.368 "traddr": "10.0.0.3", 00:13:28.368 "trsvcid": "4420" 00:13:28.368 }, 00:13:28.368 "peer_address": { 00:13:28.368 "trtype": "TCP", 00:13:28.368 "adrfam": "IPv4", 00:13:28.368 "traddr": "10.0.0.1", 00:13:28.368 "trsvcid": "43438" 00:13:28.368 }, 00:13:28.368 "auth": { 00:13:28.368 "state": "completed", 00:13:28.368 "digest": "sha256", 00:13:28.368 "dhgroup": "ffdhe8192" 00:13:28.368 } 00:13:28.368 } 00:13:28.368 ]' 00:13:28.368 02:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:28.368 02:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:28.368 02:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.368 02:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:28.368 02:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.368 02:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.368 02:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.368 02:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.938 02:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:13:28.938 02:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:13:29.509 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.509 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:29.510 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.510 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.510 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
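The pass above completes one connect_authenticate cycle for sha256/ffdhe8192 with key0; the earlier ffdhe4096 and ffdhe6144 passes follow the same sequence. A condensed sketch of that sequence, taking the host NQN and rpc.py invocations exactly as they appear in the trace (rpc_cmd is the test's wrapper for rpc.py aimed at the target's RPC socket; its definition lies outside this excerpt):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53
    # Target side: authorize the host on cnode0 with DH-HMAC-CHAP key0 and controller key ckey0.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # SPDK host side (RPC socket /var/tmp/host.sock): attach a controller with the same keys.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # ... verify the qpair (see the sketch further down), then detach before the kernel-initiator pass.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0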
00:13:29.510 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:29.510 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:29.510 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:29.768 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:13:29.768 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.768 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:29.768 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:29.768 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:29.768 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.768 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.768 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.768 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.768 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.768 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.768 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.768 02:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.334 00:13:30.334 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:30.334 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:30.334 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.593 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.593 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.593 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.593 02:58:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.593 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.593 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:30.593 { 00:13:30.593 "cntlid": 43, 00:13:30.593 "qid": 0, 00:13:30.593 "state": "enabled", 00:13:30.593 "thread": "nvmf_tgt_poll_group_000", 00:13:30.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:30.593 "listen_address": { 00:13:30.593 "trtype": "TCP", 00:13:30.593 "adrfam": "IPv4", 00:13:30.593 "traddr": "10.0.0.3", 00:13:30.593 "trsvcid": "4420" 00:13:30.593 }, 00:13:30.593 "peer_address": { 00:13:30.593 "trtype": "TCP", 00:13:30.593 "adrfam": "IPv4", 00:13:30.593 "traddr": "10.0.0.1", 00:13:30.593 "trsvcid": "43468" 00:13:30.593 }, 00:13:30.593 "auth": { 00:13:30.593 "state": "completed", 00:13:30.593 "digest": "sha256", 00:13:30.593 "dhgroup": "ffdhe8192" 00:13:30.593 } 00:13:30.593 } 00:13:30.593 ]' 00:13:30.593 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.593 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:30.593 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.852 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:30.852 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.852 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.852 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.852 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.111 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:13:31.111 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:13:31.679 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.679 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:31.679 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.679 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
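Between attach and detach, each cycle checks the target's view of the freshly authenticated admin qpair; the JSON blocks above (cntlid 41, 43, ...) are the raw output of that check. A minimal sketch of the verification step, assuming nvme0 is the only attached controller and reusing the jq filters shown in the trace:

    # The host-side controller must have come up under the expected name.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name'                          # expect: nvme0
    # Ask the target for the subsystem's qpairs and inspect the negotiated auth parameters.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    jq -r '.[0].auth.digest'  <<< "$qpairs"         # expect: sha256 (the digest under test)
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"         # expect: ffdhe8192 (the dhgroup under test)
    jq -r '.[0].auth.state'   <<< "$qpairs"         # expect: completed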
00:13:31.679 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.679 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.679 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:31.679 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:31.938 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:13:31.938 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.938 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:31.938 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:31.938 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:31.938 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.938 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.938 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.938 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.939 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.939 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.939 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.939 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.506 00:13:32.506 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:32.506 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:32.506 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.766 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.766 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.766 02:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.766 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.766 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.766 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:32.766 { 00:13:32.766 "cntlid": 45, 00:13:32.766 "qid": 0, 00:13:32.766 "state": "enabled", 00:13:32.766 "thread": "nvmf_tgt_poll_group_000", 00:13:32.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:32.766 "listen_address": { 00:13:32.766 "trtype": "TCP", 00:13:32.766 "adrfam": "IPv4", 00:13:32.766 "traddr": "10.0.0.3", 00:13:32.766 "trsvcid": "4420" 00:13:32.766 }, 00:13:32.766 "peer_address": { 00:13:32.766 "trtype": "TCP", 00:13:32.766 "adrfam": "IPv4", 00:13:32.766 "traddr": "10.0.0.1", 00:13:32.766 "trsvcid": "43858" 00:13:32.766 }, 00:13:32.766 "auth": { 00:13:32.766 "state": "completed", 00:13:32.766 "digest": "sha256", 00:13:32.766 "dhgroup": "ffdhe8192" 00:13:32.766 } 00:13:32.766 } 00:13:32.766 ]' 00:13:32.766 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.766 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:32.766 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.025 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:33.025 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.025 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.025 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.025 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.284 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:13:33.284 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:13:33.852 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.852 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:33.852 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
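Each key is then exercised a second time through the kernel initiator: nvme connect is run with the DHHC-1 encoded secrets printed in the trace, and the connection and host entry are torn down again before the next keyid. Condensed from the commands above, with $key/$ckey standing in for the DHHC-1:... host and controller secrets (keys 0-2 carry a controller secret for bidirectional authentication, key3 does not) and HOSTNQN as defined in the sketch further up:

    # Kernel initiator pass against the same target address.
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # Remove the host from the subsystem so the next keyid starts from a clean slate.
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"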
00:13:33.852 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.852 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.852 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.852 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:33.852 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:34.420 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:13:34.420 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.420 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:34.420 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:34.420 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:34.420 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.420 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:13:34.420 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.420 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.420 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.420 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:34.420 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:34.420 02:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:34.985 00:13:34.985 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:34.986 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:34.986 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.243 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.244 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.244 
02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.244 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.244 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.244 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.244 { 00:13:35.244 "cntlid": 47, 00:13:35.244 "qid": 0, 00:13:35.244 "state": "enabled", 00:13:35.244 "thread": "nvmf_tgt_poll_group_000", 00:13:35.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:35.244 "listen_address": { 00:13:35.244 "trtype": "TCP", 00:13:35.244 "adrfam": "IPv4", 00:13:35.244 "traddr": "10.0.0.3", 00:13:35.244 "trsvcid": "4420" 00:13:35.244 }, 00:13:35.244 "peer_address": { 00:13:35.244 "trtype": "TCP", 00:13:35.244 "adrfam": "IPv4", 00:13:35.244 "traddr": "10.0.0.1", 00:13:35.244 "trsvcid": "43894" 00:13:35.244 }, 00:13:35.244 "auth": { 00:13:35.244 "state": "completed", 00:13:35.244 "digest": "sha256", 00:13:35.244 "dhgroup": "ffdhe8192" 00:13:35.244 } 00:13:35.244 } 00:13:35.244 ]' 00:13:35.244 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.244 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:35.244 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.244 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:35.244 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.244 02:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.244 02:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.244 02:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.502 02:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:13:35.502 02:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:13:36.436 02:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.436 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:36.436 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.436 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
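With key3 done, every key has now been driven through sha256 with each FFDHE group seen in this excerpt; immediately below, the outer loop advances to sha384 paired with the "null" dhgroup (no DH exchange). The loop nesting, reconstructed from the for-lines visible in the trace (target/auth.sh lines 118-123), amounts to:

    for digest in "${digests[@]}"; do          # sha256, then sha384, ... (per the trace)
      for dhgroup in "${dhgroups[@]}"; do      # null and the ffdhe groups seen above
        for keyid in "${!keys[@]}"; do         # key0 .. key3
          # Pin the host to exactly one digest/dhgroup pair, then run one auth cycle.
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done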
00:13:36.436 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.436 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:36.436 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:36.436 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.436 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:36.436 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:36.694 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:13:36.694 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.694 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:36.694 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:36.694 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:36.694 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.694 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.694 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.694 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.694 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.694 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.694 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.694 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.952 00:13:36.952 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:36.952 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.952 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.211 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.211 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.211 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.211 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.211 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.211 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.211 { 00:13:37.211 "cntlid": 49, 00:13:37.211 "qid": 0, 00:13:37.211 "state": "enabled", 00:13:37.211 "thread": "nvmf_tgt_poll_group_000", 00:13:37.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:37.211 "listen_address": { 00:13:37.211 "trtype": "TCP", 00:13:37.211 "adrfam": "IPv4", 00:13:37.211 "traddr": "10.0.0.3", 00:13:37.211 "trsvcid": "4420" 00:13:37.211 }, 00:13:37.211 "peer_address": { 00:13:37.211 "trtype": "TCP", 00:13:37.211 "adrfam": "IPv4", 00:13:37.211 "traddr": "10.0.0.1", 00:13:37.211 "trsvcid": "43926" 00:13:37.211 }, 00:13:37.211 "auth": { 00:13:37.211 "state": "completed", 00:13:37.211 "digest": "sha384", 00:13:37.211 "dhgroup": "null" 00:13:37.211 } 00:13:37.211 } 00:13:37.211 ]' 00:13:37.211 02:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.211 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:37.211 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.470 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:37.470 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.470 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.470 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.470 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.729 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:13:37.729 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:13:38.300 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.300 02:58:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:38.300 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.300 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.300 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.300 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.300 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:38.300 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:38.559 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:38.559 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:38.559 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:38.559 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:38.559 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:38.559 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.559 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.559 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.559 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.559 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.559 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.559 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.559 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.126 00:13:39.126 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.127 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:13:39.127 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.386 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.386 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.386 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.386 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.386 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.386 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.386 { 00:13:39.386 "cntlid": 51, 00:13:39.386 "qid": 0, 00:13:39.386 "state": "enabled", 00:13:39.386 "thread": "nvmf_tgt_poll_group_000", 00:13:39.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:39.386 "listen_address": { 00:13:39.386 "trtype": "TCP", 00:13:39.386 "adrfam": "IPv4", 00:13:39.386 "traddr": "10.0.0.3", 00:13:39.386 "trsvcid": "4420" 00:13:39.386 }, 00:13:39.386 "peer_address": { 00:13:39.386 "trtype": "TCP", 00:13:39.386 "adrfam": "IPv4", 00:13:39.386 "traddr": "10.0.0.1", 00:13:39.386 "trsvcid": "43948" 00:13:39.386 }, 00:13:39.386 "auth": { 00:13:39.386 "state": "completed", 00:13:39.386 "digest": "sha384", 00:13:39.386 "dhgroup": "null" 00:13:39.386 } 00:13:39.386 } 00:13:39.386 ]' 00:13:39.386 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.386 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:39.386 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.386 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:39.386 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.386 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.386 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.386 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.645 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:13:39.645 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:13:40.212 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.212 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.212 02:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:40.212 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.212 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.212 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.212 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.212 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:40.212 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:40.471 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:40.471 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:40.471 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:40.471 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:40.471 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:40.471 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.471 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.471 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.471 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.471 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.471 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.471 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.471 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.731 00:13:40.989 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:40.989 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.989 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.248 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.248 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.248 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.248 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.248 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.248 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.248 { 00:13:41.248 "cntlid": 53, 00:13:41.248 "qid": 0, 00:13:41.248 "state": "enabled", 00:13:41.248 "thread": "nvmf_tgt_poll_group_000", 00:13:41.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:41.248 "listen_address": { 00:13:41.248 "trtype": "TCP", 00:13:41.248 "adrfam": "IPv4", 00:13:41.248 "traddr": "10.0.0.3", 00:13:41.248 "trsvcid": "4420" 00:13:41.248 }, 00:13:41.248 "peer_address": { 00:13:41.248 "trtype": "TCP", 00:13:41.248 "adrfam": "IPv4", 00:13:41.248 "traddr": "10.0.0.1", 00:13:41.248 "trsvcid": "43276" 00:13:41.248 }, 00:13:41.248 "auth": { 00:13:41.248 "state": "completed", 00:13:41.248 "digest": "sha384", 00:13:41.248 "dhgroup": "null" 00:13:41.248 } 00:13:41.248 } 00:13:41.248 ]' 00:13:41.248 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.248 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:41.248 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.248 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:41.248 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.248 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.248 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.248 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.507 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:13:41.507 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:13:42.445 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.445 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:42.445 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.445 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.445 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.445 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:42.445 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:42.445 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:42.705 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:13:42.705 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:42.705 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:42.705 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:42.705 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:42.705 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.705 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:13:42.705 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.705 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.705 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.705 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:42.705 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:42.705 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:42.964 00:13:42.964 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:42.964 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.964 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.224 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.224 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.224 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.224 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.224 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.224 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:43.224 { 00:13:43.224 "cntlid": 55, 00:13:43.224 "qid": 0, 00:13:43.224 "state": "enabled", 00:13:43.224 "thread": "nvmf_tgt_poll_group_000", 00:13:43.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:43.224 "listen_address": { 00:13:43.224 "trtype": "TCP", 00:13:43.224 "adrfam": "IPv4", 00:13:43.224 "traddr": "10.0.0.3", 00:13:43.224 "trsvcid": "4420" 00:13:43.224 }, 00:13:43.224 "peer_address": { 00:13:43.224 "trtype": "TCP", 00:13:43.224 "adrfam": "IPv4", 00:13:43.224 "traddr": "10.0.0.1", 00:13:43.224 "trsvcid": "43322" 00:13:43.224 }, 00:13:43.224 "auth": { 00:13:43.224 "state": "completed", 00:13:43.224 "digest": "sha384", 00:13:43.224 "dhgroup": "null" 00:13:43.224 } 00:13:43.224 } 00:13:43.224 ]' 00:13:43.224 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:43.224 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:43.483 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:43.483 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:43.483 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:43.483 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.483 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.483 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.742 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:13:43.742 02:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:13:44.311 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:13:44.311 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:44.311 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.311 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.311 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.311 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:44.311 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:44.311 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:44.311 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:44.569 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:13:44.569 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:44.569 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:44.569 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:44.569 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:44.569 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.569 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.569 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.569 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.569 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.569 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.569 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.569 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.134 00:13:45.134 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:45.134 
02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:45.134 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.392 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.392 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.392 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.392 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.392 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.392 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:45.392 { 00:13:45.392 "cntlid": 57, 00:13:45.392 "qid": 0, 00:13:45.392 "state": "enabled", 00:13:45.392 "thread": "nvmf_tgt_poll_group_000", 00:13:45.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:45.392 "listen_address": { 00:13:45.392 "trtype": "TCP", 00:13:45.392 "adrfam": "IPv4", 00:13:45.392 "traddr": "10.0.0.3", 00:13:45.392 "trsvcid": "4420" 00:13:45.392 }, 00:13:45.392 "peer_address": { 00:13:45.392 "trtype": "TCP", 00:13:45.392 "adrfam": "IPv4", 00:13:45.392 "traddr": "10.0.0.1", 00:13:45.392 "trsvcid": "43352" 00:13:45.392 }, 00:13:45.392 "auth": { 00:13:45.392 "state": "completed", 00:13:45.392 "digest": "sha384", 00:13:45.392 "dhgroup": "ffdhe2048" 00:13:45.392 } 00:13:45.392 } 00:13:45.392 ]' 00:13:45.392 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:45.392 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:45.392 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:45.392 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:45.392 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.392 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.392 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.392 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.959 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:13:45.959 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: 
--dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:13:46.526 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.526 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:46.526 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.526 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.526 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.526 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.526 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:46.526 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:46.785 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:13:46.785 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:46.785 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:46.785 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:46.785 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:46.785 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.785 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.785 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.785 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.785 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.785 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.785 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.785 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.044 00:13:47.302 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:47.302 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.302 02:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:47.560 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.560 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.560 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.560 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.560 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.560 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:47.560 { 00:13:47.560 "cntlid": 59, 00:13:47.560 "qid": 0, 00:13:47.560 "state": "enabled", 00:13:47.560 "thread": "nvmf_tgt_poll_group_000", 00:13:47.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:47.560 "listen_address": { 00:13:47.560 "trtype": "TCP", 00:13:47.560 "adrfam": "IPv4", 00:13:47.560 "traddr": "10.0.0.3", 00:13:47.560 "trsvcid": "4420" 00:13:47.560 }, 00:13:47.560 "peer_address": { 00:13:47.560 "trtype": "TCP", 00:13:47.560 "adrfam": "IPv4", 00:13:47.560 "traddr": "10.0.0.1", 00:13:47.560 "trsvcid": "43372" 00:13:47.560 }, 00:13:47.560 "auth": { 00:13:47.560 "state": "completed", 00:13:47.560 "digest": "sha384", 00:13:47.560 "dhgroup": "ffdhe2048" 00:13:47.560 } 00:13:47.560 } 00:13:47.560 ]' 00:13:47.560 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:47.560 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:47.560 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:47.560 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:47.560 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.560 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.560 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.560 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.819 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:13:47.819 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:13:48.753 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.753 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:48.754 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.754 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.754 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.754 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:48.754 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:48.754 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:48.754 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:13:48.754 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:48.754 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:48.754 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:48.754 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:48.754 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.754 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.754 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.754 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.754 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.754 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.754 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.754 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.321 00:13:49.321 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.321 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.321 02:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.579 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.579 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.579 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.580 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.580 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.580 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.580 { 00:13:49.580 "cntlid": 61, 00:13:49.580 "qid": 0, 00:13:49.580 "state": "enabled", 00:13:49.580 "thread": "nvmf_tgt_poll_group_000", 00:13:49.580 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:49.580 "listen_address": { 00:13:49.580 "trtype": "TCP", 00:13:49.580 "adrfam": "IPv4", 00:13:49.580 "traddr": "10.0.0.3", 00:13:49.580 "trsvcid": "4420" 00:13:49.580 }, 00:13:49.580 "peer_address": { 00:13:49.580 "trtype": "TCP", 00:13:49.580 "adrfam": "IPv4", 00:13:49.580 "traddr": "10.0.0.1", 00:13:49.580 "trsvcid": "43418" 00:13:49.580 }, 00:13:49.580 "auth": { 00:13:49.580 "state": "completed", 00:13:49.580 "digest": "sha384", 00:13:49.580 "dhgroup": "ffdhe2048" 00:13:49.580 } 00:13:49.580 } 00:13:49.580 ]' 00:13:49.580 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.580 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:49.580 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.580 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:49.580 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.580 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.580 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.580 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.839 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:13:49.839 02:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:13:50.772 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.772 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:50.772 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.772 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.772 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.772 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:50.772 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:50.772 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:50.772 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:13:50.772 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:50.773 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:50.773 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:50.773 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:50.773 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.773 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:13:50.773 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.773 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.773 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.773 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:50.773 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:50.773 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:51.338 00:13:51.338 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:51.338 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.338 02:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.596 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.596 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.596 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.596 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.596 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.596 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.596 { 00:13:51.596 "cntlid": 63, 00:13:51.596 "qid": 0, 00:13:51.596 "state": "enabled", 00:13:51.597 "thread": "nvmf_tgt_poll_group_000", 00:13:51.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:51.597 "listen_address": { 00:13:51.597 "trtype": "TCP", 00:13:51.597 "adrfam": "IPv4", 00:13:51.597 "traddr": "10.0.0.3", 00:13:51.597 "trsvcid": "4420" 00:13:51.597 }, 00:13:51.597 "peer_address": { 00:13:51.597 "trtype": "TCP", 00:13:51.597 "adrfam": "IPv4", 00:13:51.597 "traddr": "10.0.0.1", 00:13:51.597 "trsvcid": "42084" 00:13:51.597 }, 00:13:51.597 "auth": { 00:13:51.597 "state": "completed", 00:13:51.597 "digest": "sha384", 00:13:51.597 "dhgroup": "ffdhe2048" 00:13:51.597 } 00:13:51.597 } 00:13:51.597 ]' 00:13:51.597 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.597 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:51.597 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:51.597 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:51.597 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:51.597 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.597 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.597 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.855 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:13:51.855 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:13:52.423 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.423 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:52.423 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.423 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.423 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.423 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:52.423 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:52.423 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:52.423 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:52.993 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:13:52.993 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:52.993 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:52.993 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:52.993 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:52.993 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.993 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.993 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.993 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.993 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.993 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.993 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:13:52.993 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.253 00:13:53.253 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.253 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.253 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:53.512 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.512 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.512 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.512 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.512 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.512 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.512 { 00:13:53.512 "cntlid": 65, 00:13:53.512 "qid": 0, 00:13:53.512 "state": "enabled", 00:13:53.512 "thread": "nvmf_tgt_poll_group_000", 00:13:53.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:53.512 "listen_address": { 00:13:53.512 "trtype": "TCP", 00:13:53.512 "adrfam": "IPv4", 00:13:53.512 "traddr": "10.0.0.3", 00:13:53.512 "trsvcid": "4420" 00:13:53.512 }, 00:13:53.512 "peer_address": { 00:13:53.512 "trtype": "TCP", 00:13:53.512 "adrfam": "IPv4", 00:13:53.512 "traddr": "10.0.0.1", 00:13:53.512 "trsvcid": "42112" 00:13:53.512 }, 00:13:53.512 "auth": { 00:13:53.512 "state": "completed", 00:13:53.512 "digest": "sha384", 00:13:53.512 "dhgroup": "ffdhe3072" 00:13:53.512 } 00:13:53.512 } 00:13:53.512 ]' 00:13:53.512 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:53.513 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:53.513 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:53.772 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:53.772 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.772 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.772 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.772 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.068 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:13:54.068 02:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:13:54.662 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.662 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:54.662 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.662 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.662 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.662 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.662 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:54.662 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:54.921 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:13:54.921 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.921 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:54.921 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:54.921 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:54.921 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.921 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.921 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.921 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.921 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.921 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.921 02:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.921 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.489 00:13:55.489 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:55.489 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:55.489 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.747 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.747 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.747 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.747 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.747 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.747 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.747 { 00:13:55.747 "cntlid": 67, 00:13:55.747 "qid": 0, 00:13:55.747 "state": "enabled", 00:13:55.747 "thread": "nvmf_tgt_poll_group_000", 00:13:55.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:55.747 "listen_address": { 00:13:55.747 "trtype": "TCP", 00:13:55.747 "adrfam": "IPv4", 00:13:55.747 "traddr": "10.0.0.3", 00:13:55.748 "trsvcid": "4420" 00:13:55.748 }, 00:13:55.748 "peer_address": { 00:13:55.748 "trtype": "TCP", 00:13:55.748 "adrfam": "IPv4", 00:13:55.748 "traddr": "10.0.0.1", 00:13:55.748 "trsvcid": "42148" 00:13:55.748 }, 00:13:55.748 "auth": { 00:13:55.748 "state": "completed", 00:13:55.748 "digest": "sha384", 00:13:55.748 "dhgroup": "ffdhe3072" 00:13:55.748 } 00:13:55.748 } 00:13:55.748 ]' 00:13:55.748 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:55.748 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:55.748 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.748 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:55.748 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.748 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.748 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.748 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.006 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:13:56.006 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:13:56.573 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.833 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:56.833 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.833 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.833 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.833 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.833 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:56.833 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:57.093 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:13:57.093 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:57.093 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:57.093 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:57.093 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:57.093 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.093 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.093 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.093 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.093 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.093 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.093 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.093 02:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.352 00:13:57.352 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:57.352 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:57.352 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.612 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.612 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.612 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.612 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.612 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.612 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.612 { 00:13:57.612 "cntlid": 69, 00:13:57.612 "qid": 0, 00:13:57.612 "state": "enabled", 00:13:57.612 "thread": "nvmf_tgt_poll_group_000", 00:13:57.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:57.612 "listen_address": { 00:13:57.612 "trtype": "TCP", 00:13:57.612 "adrfam": "IPv4", 00:13:57.612 "traddr": "10.0.0.3", 00:13:57.612 "trsvcid": "4420" 00:13:57.612 }, 00:13:57.612 "peer_address": { 00:13:57.612 "trtype": "TCP", 00:13:57.612 "adrfam": "IPv4", 00:13:57.612 "traddr": "10.0.0.1", 00:13:57.612 "trsvcid": "42186" 00:13:57.612 }, 00:13:57.612 "auth": { 00:13:57.612 "state": "completed", 00:13:57.612 "digest": "sha384", 00:13:57.612 "dhgroup": "ffdhe3072" 00:13:57.612 } 00:13:57.612 } 00:13:57.612 ]' 00:13:57.612 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.871 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:57.871 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.871 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:57.871 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.871 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.871 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:57.871 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.130 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:13:58.131 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:13:58.697 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.697 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:13:58.697 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.697 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.697 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.697 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.697 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:58.697 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:58.957 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:13:58.957 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:58.957 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:58.957 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:58.957 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:58.957 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.957 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:13:58.957 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.957 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.957 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.957 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:58.957 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:58.957 02:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:59.526 00:13:59.526 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.526 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.526 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.785 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.785 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.785 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.785 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.785 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.785 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.785 { 00:13:59.785 "cntlid": 71, 00:13:59.785 "qid": 0, 00:13:59.785 "state": "enabled", 00:13:59.785 "thread": "nvmf_tgt_poll_group_000", 00:13:59.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:13:59.785 "listen_address": { 00:13:59.785 "trtype": "TCP", 00:13:59.785 "adrfam": "IPv4", 00:13:59.785 "traddr": "10.0.0.3", 00:13:59.785 "trsvcid": "4420" 00:13:59.785 }, 00:13:59.785 "peer_address": { 00:13:59.785 "trtype": "TCP", 00:13:59.785 "adrfam": "IPv4", 00:13:59.785 "traddr": "10.0.0.1", 00:13:59.785 "trsvcid": "42222" 00:13:59.785 }, 00:13:59.785 "auth": { 00:13:59.785 "state": "completed", 00:13:59.785 "digest": "sha384", 00:13:59.785 "dhgroup": "ffdhe3072" 00:13:59.785 } 00:13:59.785 } 00:13:59.785 ]' 00:13:59.785 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.785 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:59.786 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.786 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:59.786 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.786 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.786 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.786 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.045 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:14:00.045 02:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:14:00.614 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.614 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:00.614 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.614 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.614 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.614 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:00.614 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.614 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:00.614 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:01.181 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:14:01.181 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.181 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:01.181 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:01.181 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:01.181 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.181 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.181 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.181 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.181 02:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.181 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.181 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.181 02:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.440 00:14:01.440 02:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.440 02:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.440 02:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.699 02:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.699 02:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.699 02:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.699 02:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.699 02:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.699 02:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.699 { 00:14:01.700 "cntlid": 73, 00:14:01.700 "qid": 0, 00:14:01.700 "state": "enabled", 00:14:01.700 "thread": "nvmf_tgt_poll_group_000", 00:14:01.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:01.700 "listen_address": { 00:14:01.700 "trtype": "TCP", 00:14:01.700 "adrfam": "IPv4", 00:14:01.700 "traddr": "10.0.0.3", 00:14:01.700 "trsvcid": "4420" 00:14:01.700 }, 00:14:01.700 "peer_address": { 00:14:01.700 "trtype": "TCP", 00:14:01.700 "adrfam": "IPv4", 00:14:01.700 "traddr": "10.0.0.1", 00:14:01.700 "trsvcid": "55718" 00:14:01.700 }, 00:14:01.700 "auth": { 00:14:01.700 "state": "completed", 00:14:01.700 "digest": "sha384", 00:14:01.700 "dhgroup": "ffdhe4096" 00:14:01.700 } 00:14:01.700 } 00:14:01.700 ]' 00:14:01.700 02:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.700 02:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:01.700 02:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.700 02:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:01.700 02:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.959 02:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.959 02:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.959 02:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.218 02:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:14:02.218 02:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:14:02.787 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.787 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:02.787 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.787 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.787 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.787 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.787 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:02.787 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:03.047 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:14:03.047 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:03.047 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:03.047 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:03.047 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:03.047 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.047 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.047 02:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.047 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.047 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.047 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.047 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.047 02:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.615 00:14:03.615 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.615 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.615 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.615 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.615 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.615 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.615 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.615 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.615 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.615 { 00:14:03.615 "cntlid": 75, 00:14:03.615 "qid": 0, 00:14:03.615 "state": "enabled", 00:14:03.615 "thread": "nvmf_tgt_poll_group_000", 00:14:03.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:03.615 "listen_address": { 00:14:03.615 "trtype": "TCP", 00:14:03.615 "adrfam": "IPv4", 00:14:03.615 "traddr": "10.0.0.3", 00:14:03.615 "trsvcid": "4420" 00:14:03.615 }, 00:14:03.615 "peer_address": { 00:14:03.615 "trtype": "TCP", 00:14:03.615 "adrfam": "IPv4", 00:14:03.615 "traddr": "10.0.0.1", 00:14:03.615 "trsvcid": "55740" 00:14:03.615 }, 00:14:03.615 "auth": { 00:14:03.615 "state": "completed", 00:14:03.615 "digest": "sha384", 00:14:03.615 "dhgroup": "ffdhe4096" 00:14:03.615 } 00:14:03.615 } 00:14:03.615 ]' 00:14:03.615 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.874 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:03.874 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.874 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:14:03.874 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.874 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.874 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.874 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.133 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:14:04.133 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:14:05.069 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.069 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:05.069 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.069 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.069 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.069 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:05.069 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:05.069 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:05.328 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:14:05.328 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.328 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:05.328 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:05.328 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:05.328 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.328 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.328 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.328 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.328 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.328 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.328 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.328 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.587 00:14:05.587 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.587 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.587 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.847 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.847 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.847 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.847 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.847 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.847 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.847 { 00:14:05.847 "cntlid": 77, 00:14:05.847 "qid": 0, 00:14:05.847 "state": "enabled", 00:14:05.847 "thread": "nvmf_tgt_poll_group_000", 00:14:05.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:05.847 "listen_address": { 00:14:05.847 "trtype": "TCP", 00:14:05.847 "adrfam": "IPv4", 00:14:05.847 "traddr": "10.0.0.3", 00:14:05.847 "trsvcid": "4420" 00:14:05.847 }, 00:14:05.847 "peer_address": { 00:14:05.847 "trtype": "TCP", 00:14:05.847 "adrfam": "IPv4", 00:14:05.847 "traddr": "10.0.0.1", 00:14:05.847 "trsvcid": "55748" 00:14:05.847 }, 00:14:05.847 "auth": { 00:14:05.847 "state": "completed", 00:14:05.847 "digest": "sha384", 00:14:05.847 "dhgroup": "ffdhe4096" 00:14:05.847 } 00:14:05.847 } 00:14:05.847 ]' 00:14:05.847 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.847 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:06.106 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:14:06.106 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:06.106 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.106 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.106 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.106 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.365 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:14:06.365 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:14:06.933 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.933 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:06.933 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.933 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.933 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.933 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.933 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:06.933 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:07.502 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:14:07.502 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.502 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:07.502 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:07.502 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:07.502 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.503 02:58:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:14:07.503 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.503 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.503 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.503 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:07.503 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:07.503 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:07.811 00:14:07.811 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:07.811 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.811 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.084 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.084 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.084 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.084 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.084 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.084 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:08.084 { 00:14:08.084 "cntlid": 79, 00:14:08.084 "qid": 0, 00:14:08.084 "state": "enabled", 00:14:08.084 "thread": "nvmf_tgt_poll_group_000", 00:14:08.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:08.084 "listen_address": { 00:14:08.084 "trtype": "TCP", 00:14:08.084 "adrfam": "IPv4", 00:14:08.084 "traddr": "10.0.0.3", 00:14:08.084 "trsvcid": "4420" 00:14:08.084 }, 00:14:08.084 "peer_address": { 00:14:08.084 "trtype": "TCP", 00:14:08.084 "adrfam": "IPv4", 00:14:08.084 "traddr": "10.0.0.1", 00:14:08.084 "trsvcid": "55772" 00:14:08.084 }, 00:14:08.084 "auth": { 00:14:08.084 "state": "completed", 00:14:08.084 "digest": "sha384", 00:14:08.084 "dhgroup": "ffdhe4096" 00:14:08.084 } 00:14:08.084 } 00:14:08.084 ]' 00:14:08.084 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:08.084 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:08.084 02:58:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.084 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:08.084 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.084 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.084 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.084 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.353 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:14:08.353 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:14:08.921 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.181 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:09.181 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.181 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.181 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.181 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:09.181 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.181 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:09.181 02:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:09.443 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:14:09.443 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:09.443 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:09.443 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:09.443 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:09.443 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.443 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.443 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.443 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.443 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.443 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.443 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.443 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.012 00:14:10.012 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.012 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.012 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.272 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.272 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.272 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.272 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.272 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.272 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:10.272 { 00:14:10.272 "cntlid": 81, 00:14:10.272 "qid": 0, 00:14:10.272 "state": "enabled", 00:14:10.272 "thread": "nvmf_tgt_poll_group_000", 00:14:10.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:10.272 "listen_address": { 00:14:10.272 "trtype": "TCP", 00:14:10.272 "adrfam": "IPv4", 00:14:10.272 "traddr": "10.0.0.3", 00:14:10.272 "trsvcid": "4420" 00:14:10.272 }, 00:14:10.272 "peer_address": { 00:14:10.272 "trtype": "TCP", 00:14:10.272 "adrfam": "IPv4", 00:14:10.272 "traddr": "10.0.0.1", 00:14:10.272 "trsvcid": "55802" 00:14:10.272 }, 00:14:10.272 "auth": { 00:14:10.272 "state": "completed", 00:14:10.272 "digest": "sha384", 00:14:10.272 "dhgroup": "ffdhe6144" 00:14:10.272 } 00:14:10.272 } 00:14:10.272 ]' 00:14:10.272 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:14:10.272 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:10.272 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.272 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:10.272 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:10.532 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.532 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.532 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.791 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:14:10.791 02:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:14:11.361 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.361 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:11.361 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.361 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.361 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.361 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:11.361 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:11.361 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:11.620 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:14:11.621 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:11.621 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:11.621 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:14:11.621 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:11.621 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.621 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.621 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.621 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.621 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.621 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.621 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.621 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.189 00:14:12.189 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.189 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.189 02:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.448 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.448 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.448 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.448 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.448 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.448 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:12.448 { 00:14:12.448 "cntlid": 83, 00:14:12.448 "qid": 0, 00:14:12.448 "state": "enabled", 00:14:12.448 "thread": "nvmf_tgt_poll_group_000", 00:14:12.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:12.448 "listen_address": { 00:14:12.448 "trtype": "TCP", 00:14:12.448 "adrfam": "IPv4", 00:14:12.448 "traddr": "10.0.0.3", 00:14:12.448 "trsvcid": "4420" 00:14:12.448 }, 00:14:12.448 "peer_address": { 00:14:12.448 "trtype": "TCP", 00:14:12.448 "adrfam": "IPv4", 00:14:12.448 "traddr": "10.0.0.1", 00:14:12.448 "trsvcid": "46718" 00:14:12.448 }, 00:14:12.448 "auth": { 00:14:12.448 "state": "completed", 00:14:12.448 "digest": "sha384", 
00:14:12.448 "dhgroup": "ffdhe6144" 00:14:12.448 } 00:14:12.448 } 00:14:12.448 ]' 00:14:12.448 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:12.448 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:12.448 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:12.448 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:12.448 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:12.448 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.448 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.448 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.016 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:14:13.016 02:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:14:13.585 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.585 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:13.585 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.585 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.585 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.585 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:13.585 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:13.585 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:13.845 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:14:13.845 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:13.845 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:14:13.845 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:13.845 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:13.845 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.845 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.845 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.845 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.845 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.845 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.845 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.845 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.104 00:14:14.104 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.104 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.104 02:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:14.377 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.377 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.377 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.377 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.377 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.377 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:14.377 { 00:14:14.378 "cntlid": 85, 00:14:14.378 "qid": 0, 00:14:14.378 "state": "enabled", 00:14:14.378 "thread": "nvmf_tgt_poll_group_000", 00:14:14.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:14.378 "listen_address": { 00:14:14.378 "trtype": "TCP", 00:14:14.378 "adrfam": "IPv4", 00:14:14.378 "traddr": "10.0.0.3", 00:14:14.378 "trsvcid": "4420" 00:14:14.378 }, 00:14:14.378 "peer_address": { 00:14:14.378 "trtype": "TCP", 00:14:14.378 "adrfam": "IPv4", 00:14:14.378 "traddr": "10.0.0.1", 00:14:14.378 "trsvcid": "46750" 
00:14:14.378 }, 00:14:14.378 "auth": { 00:14:14.378 "state": "completed", 00:14:14.378 "digest": "sha384", 00:14:14.378 "dhgroup": "ffdhe6144" 00:14:14.378 } 00:14:14.378 } 00:14:14.378 ]' 00:14:14.637 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:14.637 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:14.637 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:14.637 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:14.637 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.637 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.637 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.637 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.894 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:14:14.894 02:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:14:15.460 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.460 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:15.460 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.460 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.717 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.717 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:15.717 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:15.717 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:15.976 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:14:15.976 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:14:15.976 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:15.976 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:15.976 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:15.976 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.976 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:14:15.976 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.976 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.976 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.976 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:15.976 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:15.976 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:16.235 00:14:16.494 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.494 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.494 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.494 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.494 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.494 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.494 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.752 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.752 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.752 { 00:14:16.752 "cntlid": 87, 00:14:16.752 "qid": 0, 00:14:16.752 "state": "enabled", 00:14:16.752 "thread": "nvmf_tgt_poll_group_000", 00:14:16.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:16.752 "listen_address": { 00:14:16.752 "trtype": "TCP", 00:14:16.752 "adrfam": "IPv4", 00:14:16.752 "traddr": "10.0.0.3", 00:14:16.752 "trsvcid": "4420" 00:14:16.752 }, 00:14:16.752 "peer_address": { 00:14:16.752 "trtype": "TCP", 00:14:16.752 "adrfam": "IPv4", 00:14:16.752 "traddr": "10.0.0.1", 00:14:16.752 "trsvcid": 
"46790" 00:14:16.752 }, 00:14:16.752 "auth": { 00:14:16.752 "state": "completed", 00:14:16.752 "digest": "sha384", 00:14:16.752 "dhgroup": "ffdhe6144" 00:14:16.752 } 00:14:16.752 } 00:14:16.752 ]' 00:14:16.752 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:16.752 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:16.752 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:16.752 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:16.752 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:16.753 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.753 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.753 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.012 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:14:17.012 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:14:17.580 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.580 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:17.580 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.580 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.580 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.580 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:17.580 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.580 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:17.580 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:17.839 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:14:17.839 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:14:17.839 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:17.839 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:17.839 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:17.839 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.839 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.839 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.839 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.839 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.839 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.839 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.839 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.407 00:14:18.407 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.407 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.407 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.666 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.666 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.666 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.666 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.666 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.666 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:18.666 { 00:14:18.666 "cntlid": 89, 00:14:18.666 "qid": 0, 00:14:18.666 "state": "enabled", 00:14:18.666 "thread": "nvmf_tgt_poll_group_000", 00:14:18.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:18.666 "listen_address": { 00:14:18.666 "trtype": "TCP", 00:14:18.666 "adrfam": "IPv4", 00:14:18.666 "traddr": "10.0.0.3", 00:14:18.666 "trsvcid": "4420" 00:14:18.666 }, 00:14:18.666 "peer_address": { 00:14:18.666 
"trtype": "TCP", 00:14:18.666 "adrfam": "IPv4", 00:14:18.666 "traddr": "10.0.0.1", 00:14:18.666 "trsvcid": "46810" 00:14:18.666 }, 00:14:18.666 "auth": { 00:14:18.666 "state": "completed", 00:14:18.666 "digest": "sha384", 00:14:18.666 "dhgroup": "ffdhe8192" 00:14:18.666 } 00:14:18.666 } 00:14:18.666 ]' 00:14:18.666 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:18.666 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:18.666 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:18.926 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:18.926 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.926 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.926 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.926 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.185 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:14:19.185 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:14:19.752 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.752 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:19.752 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.752 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.752 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.752 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:19.752 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:19.752 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:20.011 02:58:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:14:20.011 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.011 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:20.011 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:20.011 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:20.011 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.011 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.011 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.011 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.011 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.011 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.011 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.011 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.578 00:14:20.578 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:20.578 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.578 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:20.854 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.854 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.854 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.854 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.854 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.854 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:20.854 { 00:14:20.854 "cntlid": 91, 00:14:20.854 "qid": 0, 00:14:20.854 "state": "enabled", 00:14:20.854 "thread": "nvmf_tgt_poll_group_000", 00:14:20.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 
00:14:20.854 "listen_address": { 00:14:20.854 "trtype": "TCP", 00:14:20.854 "adrfam": "IPv4", 00:14:20.854 "traddr": "10.0.0.3", 00:14:20.854 "trsvcid": "4420" 00:14:20.854 }, 00:14:20.854 "peer_address": { 00:14:20.854 "trtype": "TCP", 00:14:20.854 "adrfam": "IPv4", 00:14:20.854 "traddr": "10.0.0.1", 00:14:20.854 "trsvcid": "46834" 00:14:20.854 }, 00:14:20.854 "auth": { 00:14:20.854 "state": "completed", 00:14:20.854 "digest": "sha384", 00:14:20.854 "dhgroup": "ffdhe8192" 00:14:20.854 } 00:14:20.854 } 00:14:20.854 ]' 00:14:20.854 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:20.854 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:20.854 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.121 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:21.121 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.121 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.121 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.121 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.379 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:14:21.379 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:14:21.945 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.945 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:21.945 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.945 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.945 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.945 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:21.945 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:21.945 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:22.204 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:14:22.204 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.204 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:22.204 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:22.204 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:22.204 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.204 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.204 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.204 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.204 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.204 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.204 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.204 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.142 00:14:23.142 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.142 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.142 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.142 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.142 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.142 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.142 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.142 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.142 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.142 { 00:14:23.142 "cntlid": 93, 00:14:23.142 "qid": 0, 00:14:23.142 "state": "enabled", 00:14:23.142 "thread": 
"nvmf_tgt_poll_group_000", 00:14:23.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:23.142 "listen_address": { 00:14:23.142 "trtype": "TCP", 00:14:23.142 "adrfam": "IPv4", 00:14:23.142 "traddr": "10.0.0.3", 00:14:23.142 "trsvcid": "4420" 00:14:23.142 }, 00:14:23.142 "peer_address": { 00:14:23.142 "trtype": "TCP", 00:14:23.142 "adrfam": "IPv4", 00:14:23.142 "traddr": "10.0.0.1", 00:14:23.142 "trsvcid": "48644" 00:14:23.142 }, 00:14:23.142 "auth": { 00:14:23.142 "state": "completed", 00:14:23.142 "digest": "sha384", 00:14:23.142 "dhgroup": "ffdhe8192" 00:14:23.142 } 00:14:23.142 } 00:14:23.142 ]' 00:14:23.142 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.401 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:23.401 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.401 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:23.401 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.401 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.401 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.401 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.661 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:14:23.662 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:14:24.231 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.231 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:24.231 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.231 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.231 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.231 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.231 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:24.231 02:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:24.491 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:14:24.491 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:24.491 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:24.491 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:24.491 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:24.491 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.491 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:14:24.491 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.491 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.491 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.491 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:24.491 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:24.491 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:25.059 00:14:25.318 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.318 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.318 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.318 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.318 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.318 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.318 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.318 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.318 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:25.318 { 00:14:25.318 "cntlid": 95, 00:14:25.318 "qid": 0, 00:14:25.318 "state": "enabled", 00:14:25.318 
"thread": "nvmf_tgt_poll_group_000", 00:14:25.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:25.318 "listen_address": { 00:14:25.318 "trtype": "TCP", 00:14:25.318 "adrfam": "IPv4", 00:14:25.318 "traddr": "10.0.0.3", 00:14:25.318 "trsvcid": "4420" 00:14:25.318 }, 00:14:25.318 "peer_address": { 00:14:25.318 "trtype": "TCP", 00:14:25.318 "adrfam": "IPv4", 00:14:25.318 "traddr": "10.0.0.1", 00:14:25.318 "trsvcid": "48668" 00:14:25.318 }, 00:14:25.318 "auth": { 00:14:25.318 "state": "completed", 00:14:25.318 "digest": "sha384", 00:14:25.318 "dhgroup": "ffdhe8192" 00:14:25.318 } 00:14:25.318 } 00:14:25.318 ]' 00:14:25.577 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:25.577 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:25.577 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:25.577 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:25.577 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:25.577 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.577 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.577 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.836 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:14:25.836 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:14:26.406 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.406 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:26.406 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.406 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.407 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.407 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:26.407 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:26.407 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:26.407 02:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:26.407 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:26.976 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:14:26.976 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:26.976 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:26.976 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:26.976 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:26.976 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.976 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.976 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.976 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.976 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.976 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.976 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.976 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.236 00:14:27.236 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:27.236 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.236 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.495 02:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.495 02:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.495 02:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.495 02:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.495 02:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.495 02:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.495 { 00:14:27.495 "cntlid": 97, 00:14:27.495 "qid": 0, 00:14:27.495 "state": "enabled", 00:14:27.495 "thread": "nvmf_tgt_poll_group_000", 00:14:27.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:27.495 "listen_address": { 00:14:27.495 "trtype": "TCP", 00:14:27.495 "adrfam": "IPv4", 00:14:27.495 "traddr": "10.0.0.3", 00:14:27.495 "trsvcid": "4420" 00:14:27.495 }, 00:14:27.495 "peer_address": { 00:14:27.495 "trtype": "TCP", 00:14:27.495 "adrfam": "IPv4", 00:14:27.495 "traddr": "10.0.0.1", 00:14:27.495 "trsvcid": "48696" 00:14:27.495 }, 00:14:27.495 "auth": { 00:14:27.495 "state": "completed", 00:14:27.495 "digest": "sha512", 00:14:27.495 "dhgroup": "null" 00:14:27.495 } 00:14:27.495 } 00:14:27.495 ]' 00:14:27.495 02:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:27.495 02:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:27.495 02:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.495 02:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:27.495 02:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.495 02:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.495 02:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.495 02:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.065 02:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:14:28.065 02:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:14:28.633 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.633 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:28.633 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.633 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.633 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:28.633 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.633 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:28.633 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:28.892 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:28.892 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.892 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:28.892 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:28.892 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:28.892 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.892 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.892 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.892 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.892 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.892 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.892 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.892 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.459 00:14:29.459 02:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.459 02:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.459 02:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.459 02:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.459 02:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.459 02:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.459 02:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.459 02:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.459 02:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.459 { 00:14:29.459 "cntlid": 99, 00:14:29.459 "qid": 0, 00:14:29.459 "state": "enabled", 00:14:29.459 "thread": "nvmf_tgt_poll_group_000", 00:14:29.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:29.459 "listen_address": { 00:14:29.459 "trtype": "TCP", 00:14:29.459 "adrfam": "IPv4", 00:14:29.459 "traddr": "10.0.0.3", 00:14:29.459 "trsvcid": "4420" 00:14:29.459 }, 00:14:29.459 "peer_address": { 00:14:29.459 "trtype": "TCP", 00:14:29.459 "adrfam": "IPv4", 00:14:29.459 "traddr": "10.0.0.1", 00:14:29.459 "trsvcid": "48730" 00:14:29.459 }, 00:14:29.459 "auth": { 00:14:29.459 "state": "completed", 00:14:29.459 "digest": "sha512", 00:14:29.459 "dhgroup": "null" 00:14:29.459 } 00:14:29.459 } 00:14:29.459 ]' 00:14:29.459 02:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.718 02:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:29.718 02:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.718 02:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:29.718 02:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.718 02:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.718 02:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.718 02:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.977 02:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:14:29.977 02:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:14:30.914 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.914 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:30.914 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.914 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.914 02:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.914 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.914 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:30.914 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:31.173 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:31.173 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.173 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:31.173 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:31.173 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:31.173 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.173 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.173 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.173 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.173 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.173 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.173 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.173 02:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.432 00:14:31.432 02:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:31.432 02:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:31.432 02:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.691 02:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.691 02:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.691 02:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.691 02:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.691 02:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.691 02:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.691 { 00:14:31.691 "cntlid": 101, 00:14:31.691 "qid": 0, 00:14:31.691 "state": "enabled", 00:14:31.691 "thread": "nvmf_tgt_poll_group_000", 00:14:31.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:31.691 "listen_address": { 00:14:31.691 "trtype": "TCP", 00:14:31.691 "adrfam": "IPv4", 00:14:31.691 "traddr": "10.0.0.3", 00:14:31.691 "trsvcid": "4420" 00:14:31.691 }, 00:14:31.691 "peer_address": { 00:14:31.691 "trtype": "TCP", 00:14:31.691 "adrfam": "IPv4", 00:14:31.691 "traddr": "10.0.0.1", 00:14:31.691 "trsvcid": "45730" 00:14:31.691 }, 00:14:31.691 "auth": { 00:14:31.691 "state": "completed", 00:14:31.691 "digest": "sha512", 00:14:31.691 "dhgroup": "null" 00:14:31.691 } 00:14:31.691 } 00:14:31.691 ]' 00:14:31.691 02:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.950 02:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:31.950 02:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.950 02:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:31.950 02:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.950 02:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.950 02:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.951 02:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.209 02:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:14:32.209 02:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:14:33.146 02:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.146 02:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:33.146 02:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.146 02:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:33.146 02:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.146 02:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.147 02:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:33.147 02:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:33.405 02:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:33.405 02:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.405 02:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:33.405 02:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:33.405 02:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:33.405 02:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.405 02:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:14:33.405 02:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.405 02:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.405 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.405 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:33.405 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:33.405 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:33.664 00:14:33.664 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:33.664 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.664 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:33.923 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.923 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.923 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:33.923 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.923 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.923 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.923 { 00:14:33.923 "cntlid": 103, 00:14:33.923 "qid": 0, 00:14:33.923 "state": "enabled", 00:14:33.923 "thread": "nvmf_tgt_poll_group_000", 00:14:33.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:33.923 "listen_address": { 00:14:33.923 "trtype": "TCP", 00:14:33.923 "adrfam": "IPv4", 00:14:33.923 "traddr": "10.0.0.3", 00:14:33.923 "trsvcid": "4420" 00:14:33.923 }, 00:14:33.923 "peer_address": { 00:14:33.923 "trtype": "TCP", 00:14:33.923 "adrfam": "IPv4", 00:14:33.923 "traddr": "10.0.0.1", 00:14:33.923 "trsvcid": "45762" 00:14:33.923 }, 00:14:33.923 "auth": { 00:14:33.923 "state": "completed", 00:14:33.923 "digest": "sha512", 00:14:33.923 "dhgroup": "null" 00:14:33.923 } 00:14:33.923 } 00:14:33.923 ]' 00:14:33.923 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.923 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:33.923 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.182 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:34.182 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.182 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.182 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.182 02:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.441 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:14:34.441 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:14:35.026 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.026 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:35.026 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.026 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.026 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:35.026 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:35.026 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.026 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:35.026 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:35.285 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:35.285 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.285 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:35.285 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:35.285 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:35.285 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.285 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.285 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.285 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.285 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.285 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.285 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.285 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.854 00:14:35.854 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.854 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.854 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.112 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.112 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.112 
02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.112 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.112 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.112 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.112 { 00:14:36.112 "cntlid": 105, 00:14:36.112 "qid": 0, 00:14:36.112 "state": "enabled", 00:14:36.112 "thread": "nvmf_tgt_poll_group_000", 00:14:36.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:36.112 "listen_address": { 00:14:36.112 "trtype": "TCP", 00:14:36.112 "adrfam": "IPv4", 00:14:36.112 "traddr": "10.0.0.3", 00:14:36.112 "trsvcid": "4420" 00:14:36.112 }, 00:14:36.112 "peer_address": { 00:14:36.112 "trtype": "TCP", 00:14:36.112 "adrfam": "IPv4", 00:14:36.112 "traddr": "10.0.0.1", 00:14:36.112 "trsvcid": "45794" 00:14:36.112 }, 00:14:36.112 "auth": { 00:14:36.112 "state": "completed", 00:14:36.112 "digest": "sha512", 00:14:36.113 "dhgroup": "ffdhe2048" 00:14:36.113 } 00:14:36.113 } 00:14:36.113 ]' 00:14:36.113 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.113 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:36.113 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.113 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:36.113 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.113 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.113 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.113 02:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.679 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:14:36.679 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:14:37.248 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.248 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:37.248 02:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.248 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.248 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.248 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.248 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:37.248 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:37.508 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:37.508 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.508 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:37.508 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:37.508 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:37.508 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.508 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.508 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.508 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.508 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.508 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.508 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.508 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.768 00:14:37.768 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.768 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.768 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.028 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:14:38.028 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.028 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.028 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.028 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.028 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:38.028 { 00:14:38.028 "cntlid": 107, 00:14:38.028 "qid": 0, 00:14:38.028 "state": "enabled", 00:14:38.028 "thread": "nvmf_tgt_poll_group_000", 00:14:38.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:38.028 "listen_address": { 00:14:38.028 "trtype": "TCP", 00:14:38.028 "adrfam": "IPv4", 00:14:38.028 "traddr": "10.0.0.3", 00:14:38.028 "trsvcid": "4420" 00:14:38.028 }, 00:14:38.028 "peer_address": { 00:14:38.028 "trtype": "TCP", 00:14:38.028 "adrfam": "IPv4", 00:14:38.028 "traddr": "10.0.0.1", 00:14:38.028 "trsvcid": "45838" 00:14:38.028 }, 00:14:38.028 "auth": { 00:14:38.028 "state": "completed", 00:14:38.028 "digest": "sha512", 00:14:38.028 "dhgroup": "ffdhe2048" 00:14:38.028 } 00:14:38.028 } 00:14:38.028 ]' 00:14:38.028 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:38.028 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:38.028 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.028 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:38.028 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.287 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.287 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.287 02:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.287 02:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:14:38.546 02:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:14:39.116 02:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.116 02:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:39.116 02:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.116 02:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.116 02:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.116 02:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.116 02:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:39.116 02:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:39.376 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:39.376 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.376 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:39.376 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:39.376 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:39.376 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.376 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.376 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.376 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.376 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.376 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.376 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.376 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.635 00:14:39.895 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.895 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.895 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:40.155 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.155 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.155 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.155 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.155 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.155 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.155 { 00:14:40.155 "cntlid": 109, 00:14:40.155 "qid": 0, 00:14:40.155 "state": "enabled", 00:14:40.155 "thread": "nvmf_tgt_poll_group_000", 00:14:40.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:40.155 "listen_address": { 00:14:40.155 "trtype": "TCP", 00:14:40.155 "adrfam": "IPv4", 00:14:40.155 "traddr": "10.0.0.3", 00:14:40.155 "trsvcid": "4420" 00:14:40.155 }, 00:14:40.155 "peer_address": { 00:14:40.155 "trtype": "TCP", 00:14:40.155 "adrfam": "IPv4", 00:14:40.155 "traddr": "10.0.0.1", 00:14:40.155 "trsvcid": "45864" 00:14:40.155 }, 00:14:40.155 "auth": { 00:14:40.155 "state": "completed", 00:14:40.155 "digest": "sha512", 00:14:40.155 "dhgroup": "ffdhe2048" 00:14:40.155 } 00:14:40.155 } 00:14:40.155 ]' 00:14:40.155 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.155 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:40.155 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.155 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:40.155 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.155 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.155 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.155 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.723 02:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:14:40.723 02:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:14:41.293 02:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.293 02:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:41.293 02:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.293 02:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.293 02:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.293 02:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.293 02:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:41.293 02:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:41.552 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:41.552 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.552 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:41.552 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:41.552 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:41.552 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.552 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:14:41.552 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.552 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.552 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.552 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:41.552 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:41.552 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:41.812 00:14:41.812 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.812 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.812 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.071 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.071 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.071 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.071 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.071 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.072 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.072 { 00:14:42.072 "cntlid": 111, 00:14:42.072 "qid": 0, 00:14:42.072 "state": "enabled", 00:14:42.072 "thread": "nvmf_tgt_poll_group_000", 00:14:42.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:42.072 "listen_address": { 00:14:42.072 "trtype": "TCP", 00:14:42.072 "adrfam": "IPv4", 00:14:42.072 "traddr": "10.0.0.3", 00:14:42.072 "trsvcid": "4420" 00:14:42.072 }, 00:14:42.072 "peer_address": { 00:14:42.072 "trtype": "TCP", 00:14:42.072 "adrfam": "IPv4", 00:14:42.072 "traddr": "10.0.0.1", 00:14:42.072 "trsvcid": "37416" 00:14:42.072 }, 00:14:42.072 "auth": { 00:14:42.072 "state": "completed", 00:14:42.072 "digest": "sha512", 00:14:42.072 "dhgroup": "ffdhe2048" 00:14:42.072 } 00:14:42.072 } 00:14:42.072 ]' 00:14:42.072 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.072 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:42.072 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.331 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:42.331 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.331 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.331 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.331 02:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.590 02:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:14:42.590 02:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:14:43.158 02:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.158 02:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:43.158 02:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.158 02:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.158 02:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.158 02:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:43.159 02:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.159 02:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:43.159 02:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:43.417 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:43.417 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.417 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:43.417 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:43.417 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:43.417 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.417 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.418 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.418 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.418 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.418 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.418 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.418 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.986 00:14:43.986 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.986 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.986 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.246 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.246 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.246 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.246 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.246 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.246 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.246 { 00:14:44.246 "cntlid": 113, 00:14:44.246 "qid": 0, 00:14:44.246 "state": "enabled", 00:14:44.246 "thread": "nvmf_tgt_poll_group_000", 00:14:44.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:44.246 "listen_address": { 00:14:44.246 "trtype": "TCP", 00:14:44.246 "adrfam": "IPv4", 00:14:44.246 "traddr": "10.0.0.3", 00:14:44.246 "trsvcid": "4420" 00:14:44.246 }, 00:14:44.246 "peer_address": { 00:14:44.246 "trtype": "TCP", 00:14:44.246 "adrfam": "IPv4", 00:14:44.246 "traddr": "10.0.0.1", 00:14:44.246 "trsvcid": "37434" 00:14:44.246 }, 00:14:44.246 "auth": { 00:14:44.246 "state": "completed", 00:14:44.246 "digest": "sha512", 00:14:44.246 "dhgroup": "ffdhe3072" 00:14:44.246 } 00:14:44.246 } 00:14:44.246 ]' 00:14:44.246 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.246 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.246 02:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.246 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:44.246 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.246 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.246 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.246 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.504 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:14:44.504 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:14:45.438 
02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.438 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:45.438 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.438 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.438 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.438 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.438 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:45.438 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:45.438 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:14:45.438 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.438 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:45.438 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:45.438 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:45.438 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.438 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.438 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.438 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.438 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.438 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.438 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.438 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.006 00:14:46.006 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.006 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.006 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.265 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.265 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.265 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.265 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.265 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.265 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.265 { 00:14:46.265 "cntlid": 115, 00:14:46.265 "qid": 0, 00:14:46.265 "state": "enabled", 00:14:46.265 "thread": "nvmf_tgt_poll_group_000", 00:14:46.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:46.265 "listen_address": { 00:14:46.265 "trtype": "TCP", 00:14:46.265 "adrfam": "IPv4", 00:14:46.265 "traddr": "10.0.0.3", 00:14:46.265 "trsvcid": "4420" 00:14:46.265 }, 00:14:46.265 "peer_address": { 00:14:46.265 "trtype": "TCP", 00:14:46.265 "adrfam": "IPv4", 00:14:46.265 "traddr": "10.0.0.1", 00:14:46.265 "trsvcid": "37462" 00:14:46.265 }, 00:14:46.265 "auth": { 00:14:46.265 "state": "completed", 00:14:46.265 "digest": "sha512", 00:14:46.265 "dhgroup": "ffdhe3072" 00:14:46.265 } 00:14:46.265 } 00:14:46.265 ]' 00:14:46.265 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.265 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:46.265 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.265 02:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:46.265 02:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.265 02:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.265 02:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.265 02:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.833 02:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:14:46.833 02:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: 
--dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:14:47.430 02:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.430 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:47.430 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.430 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.430 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.430 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.430 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:47.430 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:47.688 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:14:47.688 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.688 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:47.688 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:47.688 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:47.688 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.688 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.688 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.688 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.688 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.688 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.688 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.688 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.948 00:14:47.948 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.948 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.948 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.207 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.207 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.207 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.207 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.207 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.207 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.207 { 00:14:48.207 "cntlid": 117, 00:14:48.207 "qid": 0, 00:14:48.207 "state": "enabled", 00:14:48.207 "thread": "nvmf_tgt_poll_group_000", 00:14:48.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:48.207 "listen_address": { 00:14:48.207 "trtype": "TCP", 00:14:48.207 "adrfam": "IPv4", 00:14:48.207 "traddr": "10.0.0.3", 00:14:48.207 "trsvcid": "4420" 00:14:48.207 }, 00:14:48.207 "peer_address": { 00:14:48.207 "trtype": "TCP", 00:14:48.207 "adrfam": "IPv4", 00:14:48.207 "traddr": "10.0.0.1", 00:14:48.207 "trsvcid": "37490" 00:14:48.207 }, 00:14:48.207 "auth": { 00:14:48.207 "state": "completed", 00:14:48.207 "digest": "sha512", 00:14:48.207 "dhgroup": "ffdhe3072" 00:14:48.207 } 00:14:48.207 } 00:14:48.207 ]' 00:14:48.207 02:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.207 02:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:48.207 02:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.466 02:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:48.466 02:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.466 02:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.466 02:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.466 02:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.724 02:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:14:48.724 02:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid 
df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:14:49.292 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.292 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:49.292 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.292 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.550 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.550 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.550 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:49.550 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:49.810 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:14:49.810 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.810 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:49.810 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:49.810 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:49.810 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.810 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:14:49.810 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.810 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.810 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.810 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:49.810 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:49.810 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:50.069 00:14:50.069 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.069 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.069 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.328 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.328 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.328 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.328 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.328 02:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.328 02:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.328 { 00:14:50.328 "cntlid": 119, 00:14:50.328 "qid": 0, 00:14:50.328 "state": "enabled", 00:14:50.328 "thread": "nvmf_tgt_poll_group_000", 00:14:50.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:50.328 "listen_address": { 00:14:50.328 "trtype": "TCP", 00:14:50.328 "adrfam": "IPv4", 00:14:50.329 "traddr": "10.0.0.3", 00:14:50.329 "trsvcid": "4420" 00:14:50.329 }, 00:14:50.329 "peer_address": { 00:14:50.329 "trtype": "TCP", 00:14:50.329 "adrfam": "IPv4", 00:14:50.329 "traddr": "10.0.0.1", 00:14:50.329 "trsvcid": "37514" 00:14:50.329 }, 00:14:50.329 "auth": { 00:14:50.329 "state": "completed", 00:14:50.329 "digest": "sha512", 00:14:50.329 "dhgroup": "ffdhe3072" 00:14:50.329 } 00:14:50.329 } 00:14:50.329 ]' 00:14:50.329 02:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.329 02:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:50.329 02:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.329 02:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:50.329 02:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.329 02:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.329 02:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.329 02:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.896 02:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:14:50.896 02:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret 
DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:14:51.464 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.464 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:51.464 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.464 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.464 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.464 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:51.464 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.464 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:51.464 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:51.723 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:14:51.723 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.723 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:51.723 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:51.723 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:51.723 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.723 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.723 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.723 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.982 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.982 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.982 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.982 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.242 00:14:52.242 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.243 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.243 02:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.502 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.502 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.502 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.502 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.502 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.502 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.502 { 00:14:52.502 "cntlid": 121, 00:14:52.502 "qid": 0, 00:14:52.502 "state": "enabled", 00:14:52.502 "thread": "nvmf_tgt_poll_group_000", 00:14:52.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:52.502 "listen_address": { 00:14:52.502 "trtype": "TCP", 00:14:52.502 "adrfam": "IPv4", 00:14:52.502 "traddr": "10.0.0.3", 00:14:52.502 "trsvcid": "4420" 00:14:52.502 }, 00:14:52.502 "peer_address": { 00:14:52.502 "trtype": "TCP", 00:14:52.502 "adrfam": "IPv4", 00:14:52.502 "traddr": "10.0.0.1", 00:14:52.502 "trsvcid": "56910" 00:14:52.502 }, 00:14:52.502 "auth": { 00:14:52.502 "state": "completed", 00:14:52.502 "digest": "sha512", 00:14:52.502 "dhgroup": "ffdhe4096" 00:14:52.502 } 00:14:52.502 } 00:14:52.502 ]' 00:14:52.502 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.502 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:52.502 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.762 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:52.762 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.762 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.762 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.762 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.021 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:14:53.021 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:14:53.655 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.655 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:53.655 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.655 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.655 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.655 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.655 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:53.655 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:53.913 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:14:53.913 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.913 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:53.913 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:53.913 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:53.913 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.913 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.913 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.913 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.913 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.913 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.913 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.913 02:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.171 00:14:54.430 02:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.430 02:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.430 02:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.688 02:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.688 02:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.688 02:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.688 02:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.688 02:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.688 02:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.688 { 00:14:54.689 "cntlid": 123, 00:14:54.689 "qid": 0, 00:14:54.689 "state": "enabled", 00:14:54.689 "thread": "nvmf_tgt_poll_group_000", 00:14:54.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:54.689 "listen_address": { 00:14:54.689 "trtype": "TCP", 00:14:54.689 "adrfam": "IPv4", 00:14:54.689 "traddr": "10.0.0.3", 00:14:54.689 "trsvcid": "4420" 00:14:54.689 }, 00:14:54.689 "peer_address": { 00:14:54.689 "trtype": "TCP", 00:14:54.689 "adrfam": "IPv4", 00:14:54.689 "traddr": "10.0.0.1", 00:14:54.689 "trsvcid": "56934" 00:14:54.689 }, 00:14:54.689 "auth": { 00:14:54.689 "state": "completed", 00:14:54.689 "digest": "sha512", 00:14:54.689 "dhgroup": "ffdhe4096" 00:14:54.689 } 00:14:54.689 } 00:14:54.689 ]' 00:14:54.689 02:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.689 02:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:54.689 02:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.689 02:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:54.689 02:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.689 02:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.689 02:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.689 02:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.946 02:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret 
DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:14:54.946 02:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:14:55.881 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.881 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:55.881 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.881 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.881 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.881 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.881 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:55.881 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:56.139 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:14:56.139 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.139 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:56.139 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:56.139 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:56.139 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.139 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.139 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.139 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.139 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.139 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.139 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.139 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.397 00:14:56.397 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.397 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.397 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.656 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.656 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.656 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.656 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.656 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.656 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.656 { 00:14:56.656 "cntlid": 125, 00:14:56.656 "qid": 0, 00:14:56.656 "state": "enabled", 00:14:56.656 "thread": "nvmf_tgt_poll_group_000", 00:14:56.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:56.656 "listen_address": { 00:14:56.656 "trtype": "TCP", 00:14:56.656 "adrfam": "IPv4", 00:14:56.656 "traddr": "10.0.0.3", 00:14:56.656 "trsvcid": "4420" 00:14:56.656 }, 00:14:56.656 "peer_address": { 00:14:56.656 "trtype": "TCP", 00:14:56.656 "adrfam": "IPv4", 00:14:56.656 "traddr": "10.0.0.1", 00:14:56.656 "trsvcid": "56948" 00:14:56.656 }, 00:14:56.656 "auth": { 00:14:56.656 "state": "completed", 00:14:56.656 "digest": "sha512", 00:14:56.656 "dhgroup": "ffdhe4096" 00:14:56.656 } 00:14:56.656 } 00:14:56.656 ]' 00:14:56.656 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.915 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:56.915 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.915 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:56.915 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.915 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.915 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.915 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.174 02:59:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:14:57.174 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:14:57.743 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.743 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:14:57.743 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.743 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.743 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.743 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.743 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:57.743 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:58.003 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:14:58.003 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.003 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:58.003 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:58.003 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:58.003 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.003 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:14:58.003 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.003 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.003 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.003 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:58.003 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:58.003 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:58.571 00:14:58.571 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.571 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.571 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.571 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.571 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.571 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.571 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.571 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.571 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.571 { 00:14:58.571 "cntlid": 127, 00:14:58.571 "qid": 0, 00:14:58.571 "state": "enabled", 00:14:58.571 "thread": "nvmf_tgt_poll_group_000", 00:14:58.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:14:58.571 "listen_address": { 00:14:58.571 "trtype": "TCP", 00:14:58.571 "adrfam": "IPv4", 00:14:58.571 "traddr": "10.0.0.3", 00:14:58.571 "trsvcid": "4420" 00:14:58.571 }, 00:14:58.571 "peer_address": { 00:14:58.571 "trtype": "TCP", 00:14:58.571 "adrfam": "IPv4", 00:14:58.571 "traddr": "10.0.0.1", 00:14:58.571 "trsvcid": "56958" 00:14:58.571 }, 00:14:58.571 "auth": { 00:14:58.571 "state": "completed", 00:14:58.571 "digest": "sha512", 00:14:58.571 "dhgroup": "ffdhe4096" 00:14:58.571 } 00:14:58.571 } 00:14:58.571 ]' 00:14:58.571 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.830 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:58.830 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.830 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:58.830 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.830 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.830 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.830 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
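For reference, each connect_authenticate cycle traced above reduces to the same host-side RPC sequence. The sketch below is illustrative only: the address 10.0.0.3:4420, the NQNs, and the key names (key1/ckey1 and so on) are the values visible in this log, the named keys are assumed to have been registered earlier in the test setup, and the target-side calls are assumed to go to the target application's default RPC socket.

# One connect_authenticate cycle (sha512 + ffdhe4096, key1), mirroring the records above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53

# 1. Pin the host-side initiator to a single digest/dhgroup combination.
$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# 2. Allow the host on the subsystem with the key pair under test (target-side RPC).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3. Attach a controller from the host app, which forces DH-HMAC-CHAP to run.
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 4. Verify the controller exists and the qpair finished authentication as expected.
$RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name'      # expect nvme0
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'     # expect completed
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'    # expect sha512
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'   # expect ffdhe4096

# 5. Tear down before the next key/dhgroup combination.
$RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0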
00:14:59.119 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:14:59.119 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
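Between these RPC-driven cycles the log also re-checks the same credentials in-band with the kernel initiator (the nvme connect / nvme disconnect records above). A minimal sketch of that step, assuming the placeholder secrets are replaced with the DHHC-1 strings printed in the log, looks like this:

# In-band confirmation with nvme-cli, then cleanup of the host entry.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53
HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53
KEY='DHHC-1:00:<host secret from the log>'               # placeholder, not a real secret
CTRL_KEY='DHHC-1:03:<controller secret from the log>'    # placeholder, not a real secret

# Connect with bidirectional DH-HMAC-CHAP (flags as used in the log), then disconnect.
nvme connect -t tcp -a 10.0.0.3 -n $SUBNQN -i 1 -q $HOSTNQN --hostid $HOSTID -l 0 \
    --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CTRL_KEY"
nvme disconnect -n $SUBNQN   # expect "... disconnected 1 controller(s)"

# Remove the host again so the next iteration can re-add it with a different key pair.
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN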
00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.072 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.642 00:15:00.642 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.642 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.642 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.902 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.902 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.902 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.902 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.902 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.902 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.902 { 00:15:00.902 "cntlid": 129, 00:15:00.902 "qid": 0, 00:15:00.902 "state": "enabled", 00:15:00.902 "thread": "nvmf_tgt_poll_group_000", 00:15:00.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:15:00.902 "listen_address": { 00:15:00.902 "trtype": "TCP", 00:15:00.902 "adrfam": "IPv4", 00:15:00.902 "traddr": "10.0.0.3", 00:15:00.902 "trsvcid": "4420" 00:15:00.902 }, 00:15:00.902 "peer_address": { 00:15:00.902 "trtype": "TCP", 00:15:00.902 "adrfam": "IPv4", 00:15:00.902 "traddr": "10.0.0.1", 00:15:00.902 "trsvcid": "55582" 00:15:00.902 }, 00:15:00.902 "auth": { 00:15:00.902 "state": "completed", 00:15:00.902 "digest": "sha512", 00:15:00.902 "dhgroup": "ffdhe6144" 00:15:00.902 } 00:15:00.902 } 00:15:00.902 ]' 00:15:00.902 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.902 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:00.902 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.162 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:01.162 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.162 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.162 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.162 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.422 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:15:01.422 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:15:01.991 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.991 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:01.991 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.991 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.991 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.991 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.991 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:01.991 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:02.251 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:15:02.251 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.251 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:02.251 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:02.251 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:02.251 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.251 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.251 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.251 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.251 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.251 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.251 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.251 02:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.821 00:15:02.821 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.821 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.821 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.821 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.821 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.821 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.821 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.821 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.821 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.821 { 00:15:02.821 "cntlid": 131, 00:15:02.821 "qid": 0, 00:15:02.821 "state": "enabled", 00:15:02.821 "thread": "nvmf_tgt_poll_group_000", 00:15:02.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:15:02.821 "listen_address": { 00:15:02.821 "trtype": "TCP", 00:15:02.821 "adrfam": "IPv4", 00:15:02.821 "traddr": "10.0.0.3", 00:15:02.821 "trsvcid": "4420" 00:15:02.821 }, 00:15:02.821 "peer_address": { 00:15:02.821 "trtype": "TCP", 00:15:02.821 "adrfam": "IPv4", 00:15:02.821 "traddr": "10.0.0.1", 00:15:02.821 "trsvcid": "55606" 00:15:02.821 }, 00:15:02.821 "auth": { 00:15:02.821 "state": "completed", 00:15:02.821 "digest": "sha512", 00:15:02.821 "dhgroup": "ffdhe6144" 00:15:02.821 } 00:15:02.821 } 00:15:02.821 ]' 00:15:02.821 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.080 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:03.080 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.080 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:03.080 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.080 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
00:15:03.080 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.080 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.339 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:15:03.339 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:15:03.907 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.907 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:03.907 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.907 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.907 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.907 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.907 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:03.907 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:04.167 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:15:04.167 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.167 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:04.167 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:04.167 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:04.167 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.167 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.167 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.167 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:04.167 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.167 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.167 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.167 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.752 00:15:04.752 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.752 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.752 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.010 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.010 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.010 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.010 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.010 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.010 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.010 { 00:15:05.010 "cntlid": 133, 00:15:05.010 "qid": 0, 00:15:05.010 "state": "enabled", 00:15:05.010 "thread": "nvmf_tgt_poll_group_000", 00:15:05.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:15:05.010 "listen_address": { 00:15:05.010 "trtype": "TCP", 00:15:05.010 "adrfam": "IPv4", 00:15:05.010 "traddr": "10.0.0.3", 00:15:05.010 "trsvcid": "4420" 00:15:05.010 }, 00:15:05.010 "peer_address": { 00:15:05.010 "trtype": "TCP", 00:15:05.010 "adrfam": "IPv4", 00:15:05.010 "traddr": "10.0.0.1", 00:15:05.010 "trsvcid": "55636" 00:15:05.010 }, 00:15:05.010 "auth": { 00:15:05.010 "state": "completed", 00:15:05.010 "digest": "sha512", 00:15:05.010 "dhgroup": "ffdhe6144" 00:15:05.010 } 00:15:05.010 } 00:15:05.010 ]' 00:15:05.010 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.010 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:05.010 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.010 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:05.010 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.268 02:59:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.268 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.268 02:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.525 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:15:05.525 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:15:06.091 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.091 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:06.091 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.091 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.091 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.091 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.091 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:06.091 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:06.657 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:15:06.657 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.657 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:06.657 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:06.657 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:06.657 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.657 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:15:06.657 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:06.657 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.657 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.657 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:06.657 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.657 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.914 00:15:06.914 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.914 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.914 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.172 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.172 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.172 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.172 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.172 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.172 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.172 { 00:15:07.172 "cntlid": 135, 00:15:07.172 "qid": 0, 00:15:07.172 "state": "enabled", 00:15:07.172 "thread": "nvmf_tgt_poll_group_000", 00:15:07.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:15:07.172 "listen_address": { 00:15:07.172 "trtype": "TCP", 00:15:07.172 "adrfam": "IPv4", 00:15:07.172 "traddr": "10.0.0.3", 00:15:07.172 "trsvcid": "4420" 00:15:07.172 }, 00:15:07.172 "peer_address": { 00:15:07.172 "trtype": "TCP", 00:15:07.172 "adrfam": "IPv4", 00:15:07.172 "traddr": "10.0.0.1", 00:15:07.172 "trsvcid": "55662" 00:15:07.172 }, 00:15:07.172 "auth": { 00:15:07.172 "state": "completed", 00:15:07.172 "digest": "sha512", 00:15:07.172 "dhgroup": "ffdhe6144" 00:15:07.172 } 00:15:07.172 } 00:15:07.172 ]' 00:15:07.172 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.172 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:07.172 02:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.430 02:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:07.430 02:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.430 
02:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.430 02:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.430 02:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.687 02:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:15:07.688 02:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:15:08.254 02:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.254 02:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:08.254 02:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.254 02:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.254 02:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.254 02:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.254 02:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.254 02:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:08.254 02:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:08.513 02:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:15:08.513 02:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.513 02:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:08.513 02:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:08.513 02:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:08.513 02:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.513 02:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.513 02:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.513 02:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.513 02:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.513 02:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.513 02:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.514 02:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.080 00:15:09.080 02:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.080 02:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.080 02:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.339 02:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.339 02:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.339 02:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.339 02:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.339 02:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.339 02:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.339 { 00:15:09.340 "cntlid": 137, 00:15:09.340 "qid": 0, 00:15:09.340 "state": "enabled", 00:15:09.340 "thread": "nvmf_tgt_poll_group_000", 00:15:09.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:15:09.340 "listen_address": { 00:15:09.340 "trtype": "TCP", 00:15:09.340 "adrfam": "IPv4", 00:15:09.340 "traddr": "10.0.0.3", 00:15:09.340 "trsvcid": "4420" 00:15:09.340 }, 00:15:09.340 "peer_address": { 00:15:09.340 "trtype": "TCP", 00:15:09.340 "adrfam": "IPv4", 00:15:09.340 "traddr": "10.0.0.1", 00:15:09.340 "trsvcid": "55704" 00:15:09.340 }, 00:15:09.340 "auth": { 00:15:09.340 "state": "completed", 00:15:09.340 "digest": "sha512", 00:15:09.340 "dhgroup": "ffdhe8192" 00:15:09.340 } 00:15:09.340 } 00:15:09.340 ]' 00:15:09.340 02:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.340 02:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:09.340 02:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.600 02:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:09.600 02:59:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.600 02:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.600 02:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.600 02:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.860 02:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:15:09.860 02:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:15:10.428 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.428 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:10.428 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.428 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.428 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.428 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.428 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:10.428 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:10.995 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:15:10.995 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.995 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:10.995 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:10.995 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:10.995 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.995 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.995 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.995 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.995 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.995 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.995 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.995 02:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.563 00:15:11.563 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.563 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.563 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.823 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.823 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.823 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.823 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.823 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.823 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.823 { 00:15:11.823 "cntlid": 139, 00:15:11.823 "qid": 0, 00:15:11.823 "state": "enabled", 00:15:11.823 "thread": "nvmf_tgt_poll_group_000", 00:15:11.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:15:11.823 "listen_address": { 00:15:11.823 "trtype": "TCP", 00:15:11.823 "adrfam": "IPv4", 00:15:11.823 "traddr": "10.0.0.3", 00:15:11.823 "trsvcid": "4420" 00:15:11.823 }, 00:15:11.823 "peer_address": { 00:15:11.823 "trtype": "TCP", 00:15:11.823 "adrfam": "IPv4", 00:15:11.823 "traddr": "10.0.0.1", 00:15:11.823 "trsvcid": "59126" 00:15:11.823 }, 00:15:11.823 "auth": { 00:15:11.823 "state": "completed", 00:15:11.823 "digest": "sha512", 00:15:11.823 "dhgroup": "ffdhe8192" 00:15:11.823 } 00:15:11.823 } 00:15:11.823 ]' 00:15:11.823 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.823 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:11.823 02:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.081 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:12.081 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.081 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.081 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.081 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.340 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:15:12.340 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: --dhchap-ctrl-secret DHHC-1:02:MDE2ODY4ZmJhNDkyNWY0YzVkOGQ1NzNmMjIxZGUyMjU0ZGFjZjJhZjU4YjI5MTJjKbQDkw==: 00:15:12.907 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.907 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:12.907 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.907 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.907 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.907 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.907 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:12.907 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:13.166 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:15:13.166 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.166 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:13.166 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:13.166 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:13.166 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.166 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.166 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.166 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.166 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.166 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.166 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.166 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.734 00:15:13.992 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.992 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.992 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.251 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.251 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.251 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.251 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.251 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.251 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.251 { 00:15:14.251 "cntlid": 141, 00:15:14.251 "qid": 0, 00:15:14.251 "state": "enabled", 00:15:14.251 "thread": "nvmf_tgt_poll_group_000", 00:15:14.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:15:14.251 "listen_address": { 00:15:14.251 "trtype": "TCP", 00:15:14.251 "adrfam": "IPv4", 00:15:14.251 "traddr": "10.0.0.3", 00:15:14.251 "trsvcid": "4420" 00:15:14.251 }, 00:15:14.251 "peer_address": { 00:15:14.251 "trtype": "TCP", 00:15:14.251 "adrfam": "IPv4", 00:15:14.251 "traddr": "10.0.0.1", 00:15:14.251 "trsvcid": "59156" 00:15:14.251 }, 00:15:14.251 "auth": { 00:15:14.251 "state": "completed", 00:15:14.251 "digest": "sha512", 00:15:14.251 "dhgroup": "ffdhe8192" 00:15:14.251 } 00:15:14.251 } 00:15:14.251 ]' 00:15:14.251 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
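Between the RPC-level checks, each round above also authenticates through the kernel initiator with nvme-cli. The pattern is the one traced above, with shell variables standing in for the DHHC-1 secrets printed in the log:

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53
    # dhchap_secret / dhchap_ctrl_secret hold the "DHHC-1:xx:..." strings shown in the trace
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
        --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 \
        --dhchap-secret "$dhchap_secret" --dhchap-ctrl-secret "$dhchap_ctrl_secret"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0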
00:15:14.251 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:14.251 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.251 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:14.251 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.251 02:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.251 02:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.251 02:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.509 02:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:15:14.509 02:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:01:NDZlZDYwYzE2MzZhN2IzNzQ5ODIwOWE5YmFjMGIzOTZIZGdv: 00:15:15.443 02:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.443 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:15.443 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.443 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.443 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.443 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.443 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:15.443 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:15.701 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:15:15.701 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.701 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:15.701 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:15.701 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:15:15.701 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.701 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:15:15.701 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.701 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.701 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.701 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:15.701 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:15.701 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:16.267 00:15:16.267 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.267 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.267 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.525 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.525 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.525 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.525 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.525 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.525 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.525 { 00:15:16.525 "cntlid": 143, 00:15:16.525 "qid": 0, 00:15:16.525 "state": "enabled", 00:15:16.525 "thread": "nvmf_tgt_poll_group_000", 00:15:16.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:15:16.525 "listen_address": { 00:15:16.525 "trtype": "TCP", 00:15:16.525 "adrfam": "IPv4", 00:15:16.525 "traddr": "10.0.0.3", 00:15:16.525 "trsvcid": "4420" 00:15:16.525 }, 00:15:16.525 "peer_address": { 00:15:16.525 "trtype": "TCP", 00:15:16.525 "adrfam": "IPv4", 00:15:16.525 "traddr": "10.0.0.1", 00:15:16.525 "trsvcid": "59164" 00:15:16.525 }, 00:15:16.525 "auth": { 00:15:16.525 "state": "completed", 00:15:16.525 "digest": "sha512", 00:15:16.525 "dhgroup": "ffdhe8192" 00:15:16.525 } 00:15:16.525 } 00:15:16.525 ]' 00:15:16.525 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:16.525 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:16.525 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.525 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:16.525 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.525 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.525 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.525 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.784 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:15:16.784 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:15:17.720 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.720 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:17.720 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.720 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.720 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.720 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:17.720 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:17.720 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:17.720 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:17.720 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:17.721 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:17.721 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:17.721 02:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.721 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:17.721 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:17.721 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:17.721 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.721 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.721 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.721 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.721 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.721 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.721 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.721 02:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.289 00:15:18.548 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.548 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.548 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.807 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.807 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.807 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.807 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.807 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.807 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.807 { 00:15:18.807 "cntlid": 145, 00:15:18.807 "qid": 0, 00:15:18.807 "state": "enabled", 00:15:18.807 "thread": "nvmf_tgt_poll_group_000", 00:15:18.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:15:18.807 "listen_address": { 00:15:18.807 "trtype": "TCP", 00:15:18.807 "adrfam": "IPv4", 00:15:18.807 "traddr": "10.0.0.3", 
00:15:18.807 "trsvcid": "4420" 00:15:18.807 }, 00:15:18.807 "peer_address": { 00:15:18.807 "trtype": "TCP", 00:15:18.807 "adrfam": "IPv4", 00:15:18.807 "traddr": "10.0.0.1", 00:15:18.807 "trsvcid": "59198" 00:15:18.807 }, 00:15:18.807 "auth": { 00:15:18.807 "state": "completed", 00:15:18.807 "digest": "sha512", 00:15:18.807 "dhgroup": "ffdhe8192" 00:15:18.807 } 00:15:18.807 } 00:15:18.807 ]' 00:15:18.807 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.807 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:18.807 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.807 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:18.807 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.807 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.807 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.807 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.066 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:15:19.066 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:00:ZmQxZjc0ZmM4M2E4MzQwOTQ2OWQ5MTFlNWY3MmU1ZDMzMzY3ZWFhNzk3YjRjNTg1DvhJKg==: --dhchap-ctrl-secret DHHC-1:03:YzAzODdmMThlZmMzZmMyMDVjNDc5N2EyMmVlMGEwMWIwYjc0NDhlMGY4NGNkN2VjZjdmNzlhOGJjZmUwZDQ4ZJqDT90=: 00:15:20.002 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.002 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:20.002 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.002 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.002 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.003 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 00:15:20.003 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.003 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.003 
02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.003 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:20.003 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:20.003 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:15:20.003 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:20.003 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.003 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:20.003 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.003 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:20.003 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:20.003 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:20.571 request: 00:15:20.571 { 00:15:20.571 "name": "nvme0", 00:15:20.571 "trtype": "tcp", 00:15:20.571 "traddr": "10.0.0.3", 00:15:20.571 "adrfam": "ipv4", 00:15:20.571 "trsvcid": "4420", 00:15:20.571 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:20.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:15:20.571 "prchk_reftag": false, 00:15:20.571 "prchk_guard": false, 00:15:20.571 "hdgst": false, 00:15:20.571 "ddgst": false, 00:15:20.571 "dhchap_key": "key2", 00:15:20.571 "allow_unrecognized_csi": false, 00:15:20.571 "method": "bdev_nvme_attach_controller", 00:15:20.571 "req_id": 1 00:15:20.571 } 00:15:20.571 Got JSON-RPC error response 00:15:20.571 response: 00:15:20.571 { 00:15:20.571 "code": -5, 00:15:20.571 "message": "Input/output error" 00:15:20.571 } 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
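The failing attach attempts above rely on the harness's NOT wrapper, which treats a non-zero exit from the wrapped command as success. A minimal sketch of the idea, inferred from the es= tracing above (the real helper in autotest_common.sh also validates its argument and does extra status bookkeeping):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53

    NOT() {
        # pass only if the wrapped command fails
        if "$@"; then return 1; fi
        return 0
    }

    # the target was given key1 only, so authenticating with key2 must be rejected
    NOT "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2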
00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:20.831 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:21.399 request: 00:15:21.399 { 00:15:21.399 "name": "nvme0", 00:15:21.399 "trtype": "tcp", 00:15:21.399 "traddr": "10.0.0.3", 00:15:21.399 "adrfam": "ipv4", 00:15:21.399 "trsvcid": "4420", 00:15:21.399 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:21.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:15:21.399 "prchk_reftag": false, 00:15:21.399 "prchk_guard": false, 00:15:21.399 "hdgst": false, 00:15:21.399 "ddgst": false, 00:15:21.399 "dhchap_key": "key1", 00:15:21.399 "dhchap_ctrlr_key": "ckey2", 00:15:21.399 "allow_unrecognized_csi": false, 00:15:21.399 "method": "bdev_nvme_attach_controller", 00:15:21.399 "req_id": 1 00:15:21.399 } 00:15:21.399 Got JSON-RPC error response 00:15:21.399 response: 00:15:21.399 { 00:15:21.399 "code": -5, 00:15:21.399 "message": "Input/output error" 00:15:21.399 } 00:15:21.399 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:21.399 02:59:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:21.399 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:21.399 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:21.399 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:21.399 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.399 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.399 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.400 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 00:15:21.400 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.400 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.400 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.400 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.400 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:21.400 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.400 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:21.400 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.400 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:21.400 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.400 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.400 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.400 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.336 request: 00:15:22.336 { 00:15:22.336 "name": "nvme0", 00:15:22.336 "trtype": "tcp", 00:15:22.336 "traddr": "10.0.0.3", 00:15:22.336 "adrfam": "ipv4", 00:15:22.336 "trsvcid": "4420", 00:15:22.336 "subnqn": 
"nqn.2024-03.io.spdk:cnode0", 00:15:22.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:15:22.336 "prchk_reftag": false, 00:15:22.337 "prchk_guard": false, 00:15:22.337 "hdgst": false, 00:15:22.337 "ddgst": false, 00:15:22.337 "dhchap_key": "key1", 00:15:22.337 "dhchap_ctrlr_key": "ckey1", 00:15:22.337 "allow_unrecognized_csi": false, 00:15:22.337 "method": "bdev_nvme_attach_controller", 00:15:22.337 "req_id": 1 00:15:22.337 } 00:15:22.337 Got JSON-RPC error response 00:15:22.337 response: 00:15:22.337 { 00:15:22.337 "code": -5, 00:15:22.337 "message": "Input/output error" 00:15:22.337 } 00:15:22.337 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:22.337 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:22.337 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:22.337 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:22.337 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:22.337 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.337 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.337 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.337 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 69827 00:15:22.337 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 69827 ']' 00:15:22.337 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 69827 00:15:22.337 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:22.337 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.337 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69827 00:15:22.337 killing process with pid 69827 00:15:22.337 02:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.337 02:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.337 02:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69827' 00:15:22.337 02:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 69827 00:15:22.337 02:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 69827 00:15:23.282 02:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:23.282 02:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:23.282 02:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:23.282 02:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:23.282 02:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=72876 00:15:23.282 02:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 72876 00:15:23.282 02:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:23.282 02:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 72876 ']' 00:15:23.282 02:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.282 02:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.282 02:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.282 02:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.282 02:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.690 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.690 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:24.690 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:24.690 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:24.690 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.690 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.690 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:24.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.690 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 72876 00:15:24.690 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 72876 ']' 00:15:24.690 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.690 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.690 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
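Above, the previous target (pid 69827) has been killed and a fresh nvmf_tgt (pid 72876) is started inside the nvmf_tgt_ns_spdk namespace with -L nvmf_auth, after which the test waits for it to listen on /var/tmp/spdk.sock. Below is a small Python sketch of that readiness wait; the 30-second budget and 0.1-second poll interval are arbitrary values chosen for this illustration, not the values the harness uses.

#!/usr/bin/env python3
# Illustrative sketch, not part of the test: the readiness wait performed (in
# spirit) by waitforlisten above, i.e. poll until the freshly started nvmf_tgt
# (pid 72876) accepts connections on /var/tmp/spdk.sock.
import socket
import time

def wait_for_rpc_socket(path, timeout_s=30.0):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(path)
                return True   # target is accepting RPC connections
        except OSError:
            time.sleep(0.1)   # not listening yet, retry shortly
    return False

if __name__ == "__main__":
    ready = wait_for_rpc_socket("/var/tmp/spdk.sock")
    print("nvmf_tgt RPC socket ready" if ready else "timed out waiting for RPC socket")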
00:15:24.690 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.690 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.690 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.690 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:24.690 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:24.690 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.690 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.256 null0 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nvN 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.yd2 ]] 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yd2 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.l6h 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.QFo ]] 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.QFo 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:25.256 02:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4ij 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.1vm ]] 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1vm 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.EUe 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
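The block above loads each generated secret file into the target keyring (keyring_file_add_key key0../tmp/spdk.key-null.nvN through key3../tmp/spdk.key-sha512.EUe, plus the ckey files where present) and then allows the host NQN on the subsystem with the chosen key (nvmf_subsystem_add_host ... --dhchap-key key3) before the host attaches. Below is a compact Python sketch of that registration pattern; file paths, NQNs, RPC names and key names are copied from the log, while driving the target side through rpc.py's default /var/tmp/spdk.sock socket is an assumption of this sketch.

#!/usr/bin/env python3
# Illustrative sketch, not part of the test: register a DH-CHAP secret file in
# the target keyring and allow the host NQN on the subsystem with that key,
# mirroring the sequence logged above.
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
SUBNQN = "nqn.2024-03.io.spdk:cnode0"
HOSTNQN = "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53"

def rpc(*args):
    # Run one target-side RPC and raise if it exits non-zero.
    subprocess.run([RPC, *args], check=True)

if __name__ == "__main__":
    # Register the sha512 host key from the log in the target keyring.
    rpc("keyring_file_add_key", "key3", "/tmp/spdk.key-sha512.EUe")
    # Allow the host on the subsystem, requiring DH-CHAP with that key.
    rpc("nvmf_subsystem_add_host", SUBNQN, HOSTNQN, "--dhchap-key", "key3")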
00:15:25.256 02:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:26.192 nvme0n1 00:15:26.192 02:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.192 02:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.192 02:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.451 02:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.451 02:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.451 02:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.451 02:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.451 02:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.451 02:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.451 { 00:15:26.451 "cntlid": 1, 00:15:26.451 "qid": 0, 00:15:26.451 "state": "enabled", 00:15:26.451 "thread": "nvmf_tgt_poll_group_000", 00:15:26.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:15:26.451 "listen_address": { 00:15:26.451 "trtype": "TCP", 00:15:26.451 "adrfam": "IPv4", 00:15:26.451 "traddr": "10.0.0.3", 00:15:26.451 "trsvcid": "4420" 00:15:26.451 }, 00:15:26.451 "peer_address": { 00:15:26.451 "trtype": "TCP", 00:15:26.451 "adrfam": "IPv4", 00:15:26.451 "traddr": "10.0.0.1", 00:15:26.451 "trsvcid": "49538" 00:15:26.451 }, 00:15:26.451 "auth": { 00:15:26.451 "state": "completed", 00:15:26.451 "digest": "sha512", 00:15:26.451 "dhgroup": "ffdhe8192" 00:15:26.451 } 00:15:26.451 } 00:15:26.451 ]' 00:15:26.451 02:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.451 02:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:26.451 02:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.710 02:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:26.710 02:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.710 02:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.710 02:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.710 02:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.969 02:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:15:26.969 02:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key3 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.906 02:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.165 request: 00:15:28.165 { 00:15:28.165 "name": "nvme0", 00:15:28.165 "trtype": "tcp", 00:15:28.165 "traddr": "10.0.0.3", 00:15:28.165 "adrfam": "ipv4", 00:15:28.165 "trsvcid": "4420", 00:15:28.165 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:28.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:15:28.165 "prchk_reftag": false, 00:15:28.165 "prchk_guard": false, 00:15:28.165 "hdgst": false, 00:15:28.165 "ddgst": false, 00:15:28.165 "dhchap_key": "key3", 00:15:28.165 "allow_unrecognized_csi": false, 00:15:28.165 "method": "bdev_nvme_attach_controller", 00:15:28.165 "req_id": 1 00:15:28.165 } 00:15:28.165 Got JSON-RPC error response 00:15:28.165 response: 00:15:28.165 { 00:15:28.165 "code": -5, 00:15:28.165 "message": "Input/output error" 00:15:28.165 } 00:15:28.424 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:28.424 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:28.424 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:28.424 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:28.424 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:28.424 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:28.424 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:28.424 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:28.684 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:28.684 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:28.684 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:28.684 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:28.684 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.684 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:28.684 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.684 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:28.684 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.684 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.944 request: 00:15:28.944 { 00:15:28.944 "name": "nvme0", 00:15:28.944 "trtype": "tcp", 00:15:28.944 "traddr": "10.0.0.3", 00:15:28.944 "adrfam": "ipv4", 00:15:28.944 "trsvcid": "4420", 00:15:28.944 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:28.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:15:28.944 "prchk_reftag": false, 00:15:28.944 "prchk_guard": false, 00:15:28.944 "hdgst": false, 00:15:28.944 "ddgst": false, 00:15:28.944 "dhchap_key": "key3", 00:15:28.944 "allow_unrecognized_csi": false, 00:15:28.944 "method": "bdev_nvme_attach_controller", 00:15:28.944 "req_id": 1 00:15:28.944 } 00:15:28.944 Got JSON-RPC error response 00:15:28.944 response: 00:15:28.944 { 00:15:28.944 "code": -5, 00:15:28.944 "message": "Input/output error" 00:15:28.944 } 00:15:28.944 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:28.944 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:28.944 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:28.944 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:28.944 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:28.944 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:28.944 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:28.944 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:28.944 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:28.944 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:29.203 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:29.203 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.203 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.203 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.203 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:29.203 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.203 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.203 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.203 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:29.203 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:29.203 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:29.203 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:29.203 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.204 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:29.204 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.204 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:29.204 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:29.204 02:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:29.774 request: 00:15:29.774 { 00:15:29.774 "name": "nvme0", 00:15:29.774 "trtype": "tcp", 00:15:29.774 "traddr": "10.0.0.3", 00:15:29.774 "adrfam": "ipv4", 00:15:29.774 "trsvcid": "4420", 00:15:29.774 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:29.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:15:29.774 "prchk_reftag": false, 00:15:29.774 "prchk_guard": false, 00:15:29.774 "hdgst": false, 00:15:29.774 "ddgst": false, 00:15:29.774 "dhchap_key": "key0", 00:15:29.774 "dhchap_ctrlr_key": "key1", 00:15:29.774 "allow_unrecognized_csi": false, 00:15:29.774 "method": "bdev_nvme_attach_controller", 00:15:29.774 "req_id": 1 00:15:29.774 } 00:15:29.774 Got JSON-RPC error response 00:15:29.774 response: 00:15:29.774 { 00:15:29.774 "code": -5, 00:15:29.774 "message": "Input/output error" 00:15:29.774 } 00:15:29.774 03:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:29.774 03:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:29.774 03:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:29.774 03:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:15:29.774 03:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:29.774 03:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:29.774 03:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:30.032 nvme0n1 00:15:30.032 03:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:30.032 03:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:30.032 03:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.290 03:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.290 03:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.290 03:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.855 03:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 00:15:30.855 03:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.855 03:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.855 03:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.855 03:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:30.855 03:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:30.855 03:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:31.789 nvme0n1 00:15:31.789 03:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:31.789 03:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.789 03:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:32.048 03:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.048 03:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:32.048 03:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.048 03:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.048 03:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.048 03:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:32.048 03:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.048 03:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:32.307 03:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.307 03:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:15:32.307 03:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid df5c4e32-2325-45d3-96aa-3fdfe3165f53 -l 0 --dhchap-secret DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: --dhchap-ctrl-secret DHHC-1:03:YmYyNDA4NjcwYTc0MGYxZTI4ZjU1NTNiOGNhMGU2YTUxNzBhODBiZDkwNmRiYzAxYzBhNDkwOGI5MzFhZDc3YRaM4MI=: 00:15:33.243 03:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:15:33.243 03:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:33.243 03:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:33.243 03:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:33.243 03:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:33.243 03:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:33.243 03:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:33.243 03:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.243 03:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.502 03:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:33.502 03:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:33.502 03:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:33.502 03:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:33.502 03:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:33.502 03:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:33.502 03:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:33.502 03:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:33.502 03:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:33.502 03:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:34.439 request: 00:15:34.439 { 00:15:34.439 "name": "nvme0", 00:15:34.439 "trtype": "tcp", 00:15:34.439 "traddr": "10.0.0.3", 00:15:34.439 "adrfam": "ipv4", 00:15:34.439 "trsvcid": "4420", 00:15:34.439 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:34.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53", 00:15:34.439 "prchk_reftag": false, 00:15:34.439 "prchk_guard": false, 00:15:34.439 "hdgst": false, 00:15:34.439 "ddgst": false, 00:15:34.439 "dhchap_key": "key1", 00:15:34.439 "allow_unrecognized_csi": false, 00:15:34.439 "method": "bdev_nvme_attach_controller", 00:15:34.439 "req_id": 1 00:15:34.439 } 00:15:34.439 Got JSON-RPC error response 00:15:34.439 response: 00:15:34.439 { 00:15:34.439 "code": -5, 00:15:34.439 "message": "Input/output error" 00:15:34.439 } 00:15:34.439 03:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:34.439 03:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:34.439 03:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:34.439 03:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:34.439 03:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:34.439 03:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:34.439 03:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:35.392 nvme0n1 00:15:35.392 
03:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:35.392 03:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:35.392 03:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.392 03:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.392 03:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.392 03:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.958 03:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:35.958 03:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.958 03:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.958 03:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.958 03:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:35.958 03:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:35.958 03:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:36.217 nvme0n1 00:15:36.217 03:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:36.217 03:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:36.217 03:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.475 03:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.475 03:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.475 03:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.734 03:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:36.734 03:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.734 03:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.734 03:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.734 03:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: '' 2s 00:15:36.734 03:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:36.734 03:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:36.734 03:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: 00:15:36.734 03:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:36.734 03:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:36.734 03:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:36.734 03:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: ]] 00:15:36.734 03:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MGFlNTRjMzU5NGI2MGNjNTFlZjgzZTY3N2QxMzQyNzIE0SVZ: 00:15:36.734 03:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:36.734 03:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:36.734 03:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: 2s 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:38.661 03:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: ]] 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NGYxODVmODBkMzFjYzgzNmViYTdkYjg3ZDg2NzA1ZmExMWEyMDdkNDk3ZGM0NzVi5v5Eng==: 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:38.661 03:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:41.196 03:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:41.196 03:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:41.196 03:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:41.196 03:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:41.196 03:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:41.196 03:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:41.196 03:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:41.196 03:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.196 03:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:41.196 03:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.196 03:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.196 03:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.196 03:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:41.196 03:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:41.196 03:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:41.765 nvme0n1 00:15:41.765 03:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:41.765 03:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.765 03:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.765 03:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.765 03:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:41.765 03:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:42.332 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:42.332 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:15:42.332 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.592 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.592 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:42.592 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.592 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.592 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.592 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:42.592 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:42.850 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:42.850 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.850 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:15:43.109 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.110 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:43.110 03:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.110 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.110 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.110 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:43.110 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:43.110 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:43.110 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:43.110 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:43.110 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:43.110 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:43.110 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:43.110 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:43.677 request: 00:15:43.677 { 00:15:43.677 "name": "nvme0", 00:15:43.677 "dhchap_key": "key1", 00:15:43.677 "dhchap_ctrlr_key": "key3", 00:15:43.677 "method": "bdev_nvme_set_keys", 00:15:43.677 "req_id": 1 00:15:43.677 } 00:15:43.677 Got JSON-RPC error response 00:15:43.677 response: 00:15:43.677 { 00:15:43.677 "code": -13, 00:15:43.677 "message": "Permission denied" 00:15:43.677 } 00:15:43.677 03:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:43.677 03:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:43.677 03:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:43.677 03:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:43.677 03:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:43.677 03:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.677 03:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:44.245 03:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:44.245 03:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:45.177 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:45.177 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:45.177 03:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.435 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:45.435 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:45.435 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.435 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.435 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.435 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:45.435 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:45.435 03:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:46.372 nvme0n1 00:15:46.372 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:46.372 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.372 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.372 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.372 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:46.372 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:46.372 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:46.372 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:46.372 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:46.372 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:46.372 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:46.372 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:46.372 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:46.942 request: 00:15:46.942 { 00:15:46.942 "name": "nvme0", 00:15:46.942 "dhchap_key": "key2", 00:15:46.942 "dhchap_ctrlr_key": "key0", 00:15:46.942 "method": "bdev_nvme_set_keys", 00:15:46.942 "req_id": 1 00:15:46.942 } 00:15:46.942 Got JSON-RPC error response 00:15:46.942 response: 00:15:46.942 { 00:15:46.942 "code": -13, 00:15:46.942 "message": "Permission denied" 00:15:46.942 } 00:15:46.942 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:46.942 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:46.942 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:46.942 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:46.942 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:46.942 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.942 03:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:47.201 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:47.201 03:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:48.578 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:48.578 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:48.578 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.578 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:48.578 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:48.578 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:48.578 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69859 00:15:48.578 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 69859 ']' 00:15:48.578 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 69859 00:15:48.578 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:48.578 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:48.578 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69859 00:15:48.578 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:48.578 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:48.578 killing process with pid 69859 00:15:48.578 03:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69859' 00:15:48.578 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 69859 00:15:48.578 03:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 69859 00:15:50.483 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:50.483 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:50.483 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:50.483 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:50.483 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:50.483 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:50.483 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:50.483 rmmod nvme_tcp 00:15:50.483 rmmod nvme_fabrics 00:15:50.741 rmmod nvme_keyring 00:15:50.741 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:50.741 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:15:50.741 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:50.741 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 72876 ']' 00:15:50.741 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 72876 00:15:50.741 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 72876 ']' 00:15:50.741 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 72876 00:15:50.741 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:50.741 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.741 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72876 00:15:50.741 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:50.741 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:50.741 killing process with pid 72876 00:15:50.741 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72876' 00:15:50.741 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 72876 00:15:50.741 03:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 72876 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:15:51.706 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.nvN /tmp/spdk.key-sha256.l6h /tmp/spdk.key-sha384.4ij /tmp/spdk.key-sha512.EUe /tmp/spdk.key-sha512.yd2 /tmp/spdk.key-sha384.QFo /tmp/spdk.key-sha256.1vm '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:51.965 00:15:51.965 real 3m14.546s 00:15:51.965 user 7m43.341s 00:15:51.965 sys 0m28.794s 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.965 ************************************ 00:15:51.965 END TEST nvmf_auth_target 
00:15:51.965 ************************************ 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:51.965 ************************************ 00:15:51.965 START TEST nvmf_bdevio_no_huge 00:15:51.965 ************************************ 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:51.965 * Looking for test storage... 00:15:51.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:15:51.965 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:51.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.966 --rc genhtml_branch_coverage=1 00:15:51.966 --rc genhtml_function_coverage=1 00:15:51.966 --rc genhtml_legend=1 00:15:51.966 --rc geninfo_all_blocks=1 00:15:51.966 --rc geninfo_unexecuted_blocks=1 00:15:51.966 00:15:51.966 ' 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:51.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.966 --rc genhtml_branch_coverage=1 00:15:51.966 --rc genhtml_function_coverage=1 00:15:51.966 --rc genhtml_legend=1 00:15:51.966 --rc geninfo_all_blocks=1 00:15:51.966 --rc geninfo_unexecuted_blocks=1 00:15:51.966 00:15:51.966 ' 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:51.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.966 --rc genhtml_branch_coverage=1 00:15:51.966 --rc genhtml_function_coverage=1 00:15:51.966 --rc genhtml_legend=1 00:15:51.966 --rc geninfo_all_blocks=1 00:15:51.966 --rc geninfo_unexecuted_blocks=1 00:15:51.966 00:15:51.966 ' 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:51.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.966 --rc genhtml_branch_coverage=1 00:15:51.966 --rc genhtml_function_coverage=1 00:15:51.966 --rc genhtml_legend=1 00:15:51.966 --rc geninfo_all_blocks=1 00:15:51.966 --rc geninfo_unexecuted_blocks=1 00:15:51.966 00:15:51.966 ' 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:51.966 
03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.966 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:52.226 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:52.226 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:52.227 
03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:52.227 Cannot find device "nvmf_init_br" 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:52.227 Cannot find device "nvmf_init_br2" 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:52.227 Cannot find device "nvmf_tgt_br" 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.227 Cannot find device "nvmf_tgt_br2" 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:52.227 Cannot find device "nvmf_init_br" 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:52.227 Cannot find device "nvmf_init_br2" 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:52.227 Cannot find device "nvmf_tgt_br" 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:52.227 Cannot find device "nvmf_tgt_br2" 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:52.227 Cannot find device "nvmf_br" 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:52.227 Cannot find device "nvmf_init_if" 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:52.227 Cannot find device "nvmf_init_if2" 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:15:52.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:52.227 03:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:52.227 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:52.227 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:52.227 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:52.227 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:52.227 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:52.227 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:52.227 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:52.227 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:52.227 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:52.227 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:52.227 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:52.487 03:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:52.487 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:52.487 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:15:52.487 00:15:52.487 --- 10.0.0.3 ping statistics --- 00:15:52.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.487 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:52.487 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:52.487 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:15:52.487 00:15:52.487 --- 10.0.0.4 ping statistics --- 00:15:52.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.487 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:52.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:52.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:15:52.487 00:15:52.487 --- 10.0.0.1 ping statistics --- 00:15:52.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.487 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:52.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:52.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:15:52.487 00:15:52.487 --- 10.0.0.2 ping statistics --- 00:15:52.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.487 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:52.487 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:52.488 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:52.488 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=73553 00:15:52.488 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:52.488 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 73553 00:15:52.488 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 73553 ']' 00:15:52.488 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.488 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.488 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.488 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.488 03:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:52.747 [2024-12-05 03:00:23.354379] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:15:52.747 [2024-12-05 03:00:23.354551] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:52.747 [2024-12-05 03:00:23.574075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:53.007 [2024-12-05 03:00:23.702306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.007 [2024-12-05 03:00:23.702420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.007 [2024-12-05 03:00:23.702438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.007 [2024-12-05 03:00:23.702454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.007 [2024-12-05 03:00:23.702466] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:53.007 [2024-12-05 03:00:23.704084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:53.007 [2024-12-05 03:00:23.704321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:53.007 [2024-12-05 03:00:23.704437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:53.007 [2024-12-05 03:00:23.704438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:53.266 [2024-12-05 03:00:23.884330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:53.835 [2024-12-05 03:00:24.420283] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:53.835 Malloc0 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.835 03:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:53.835 [2024-12-05 03:00:24.512072] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:53.835 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:53.835 { 00:15:53.835 "params": { 00:15:53.835 "name": "Nvme$subsystem", 00:15:53.836 "trtype": "$TEST_TRANSPORT", 00:15:53.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:53.836 "adrfam": "ipv4", 00:15:53.836 "trsvcid": "$NVMF_PORT", 00:15:53.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:53.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:53.836 "hdgst": ${hdgst:-false}, 00:15:53.836 "ddgst": ${ddgst:-false} 00:15:53.836 }, 00:15:53.836 "method": "bdev_nvme_attach_controller" 00:15:53.836 } 00:15:53.836 EOF 00:15:53.836 )") 00:15:53.836 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:15:53.836 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:15:53.836 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:15:53.836 03:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:53.836 "params": { 00:15:53.836 "name": "Nvme1", 00:15:53.836 "trtype": "tcp", 00:15:53.836 "traddr": "10.0.0.3", 00:15:53.836 "adrfam": "ipv4", 00:15:53.836 "trsvcid": "4420", 00:15:53.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:53.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:53.836 "hdgst": false, 00:15:53.836 "ddgst": false 00:15:53.836 }, 00:15:53.836 "method": "bdev_nvme_attach_controller" 00:15:53.836 }' 00:15:53.836 [2024-12-05 03:00:24.637942] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:15:53.836 [2024-12-05 03:00:24.638192] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid73589 ] 00:15:54.095 [2024-12-05 03:00:24.840714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:54.354 [2024-12-05 03:00:25.079821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.354 [2024-12-05 03:00:25.079968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.354 [2024-12-05 03:00:25.080188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.613 [2024-12-05 03:00:25.311905] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:54.872 I/O targets: 00:15:54.872 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:54.872 00:15:54.872 00:15:54.872 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.872 http://cunit.sourceforge.net/ 00:15:54.872 00:15:54.872 00:15:54.872 Suite: bdevio tests on: Nvme1n1 00:15:54.872 Test: blockdev write read block ...passed 00:15:54.872 Test: blockdev write zeroes read block ...passed 00:15:54.872 Test: blockdev write zeroes read no split ...passed 00:15:54.872 Test: blockdev write zeroes read split ...passed 00:15:54.872 Test: blockdev write zeroes read split partial ...passed 00:15:54.872 Test: blockdev reset ...[2024-12-05 03:00:25.677650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:54.872 [2024-12-05 03:00:25.677841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:15:54.872 [2024-12-05 03:00:25.689514] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:54.872 passed 00:15:54.872 Test: blockdev write read 8 blocks ...passed 00:15:54.872 Test: blockdev write read size > 128k ...passed 00:15:54.872 Test: blockdev write read invalid size ...passed 00:15:54.872 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:54.872 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:54.872 Test: blockdev write read max offset ...passed 00:15:54.872 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:54.872 Test: blockdev writev readv 8 blocks ...passed 00:15:54.872 Test: blockdev writev readv 30 x 1block ...passed 00:15:54.872 Test: blockdev writev readv block ...passed 00:15:54.872 Test: blockdev writev readv size > 128k ...passed 00:15:54.872 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:54.872 Test: blockdev comparev and writev ...[2024-12-05 03:00:25.701689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.872 [2024-12-05 03:00:25.701773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:54.872 [2024-12-05 03:00:25.701812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.872 [2024-12-05 03:00:25.701836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:54.872 [2024-12-05 03:00:25.702506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.872 [2024-12-05 03:00:25.702557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:54.872 [2024-12-05 03:00:25.702601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.872 [2024-12-05 03:00:25.702622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:54.872 [2024-12-05 03:00:25.703244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.872 [2024-12-05 03:00:25.703291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:54.872 [2024-12-05 03:00:25.703320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.872 [2024-12-05 03:00:25.703345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:54.872 [2024-12-05 03:00:25.703833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.872 [2024-12-05 03:00:25.703878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:54.872 [2024-12-05 03:00:25.703906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.872 [2024-12-05 03:00:25.703925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:54.872 passed 00:15:54.872 Test: blockdev nvme passthru rw ...passed 00:15:54.872 Test: blockdev nvme passthru vendor specific ...[2024-12-05 03:00:25.704944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:54.872 [2024-12-05 03:00:25.704995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:54.872 [2024-12-05 03:00:25.705151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:54.872 [2024-12-05 03:00:25.705189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:54.872 [2024-12-05 03:00:25.705336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:54.872 [2024-12-05 03:00:25.705367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:54.872 [2024-12-05 03:00:25.705511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:54.872 [2024-12-05 03:00:25.705546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:54.872 passed 00:15:55.131 Test: blockdev nvme admin passthru ...passed 00:15:55.131 Test: blockdev copy ...passed 00:15:55.131 00:15:55.131 Run Summary: Type Total Ran Passed Failed Inactive 00:15:55.131 suites 1 1 n/a 0 0 00:15:55.131 tests 23 23 23 0 0 00:15:55.131 asserts 152 152 152 0 n/a 00:15:55.131 00:15:55.131 Elapsed time = 0.270 seconds 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:55.700 rmmod nvme_tcp 00:15:55.700 rmmod nvme_fabrics 00:15:55.700 rmmod nvme_keyring 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 73553 ']' 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 73553 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 73553 ']' 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 73553 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.700 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73553 00:15:55.959 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:15:55.959 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:15:55.959 killing process with pid 73553 00:15:55.959 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73553' 00:15:55.959 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 73553 00:15:55.959 03:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 73553 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:56.896 03:00:27 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.896 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:15:57.155 00:15:57.155 real 0m5.131s 00:15:57.155 user 0m17.931s 00:15:57.155 sys 0m1.619s 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:57.155 ************************************ 00:15:57.155 END TEST nvmf_bdevio_no_huge 00:15:57.155 ************************************ 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:57.155 ************************************ 00:15:57.155 START TEST nvmf_tls 00:15:57.155 ************************************ 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:57.155 * Looking for test storage... 
00:15:57.155 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:57.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.155 --rc genhtml_branch_coverage=1 00:15:57.155 --rc genhtml_function_coverage=1 00:15:57.155 --rc genhtml_legend=1 00:15:57.155 --rc geninfo_all_blocks=1 00:15:57.155 --rc geninfo_unexecuted_blocks=1 00:15:57.155 00:15:57.155 ' 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:57.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.155 --rc genhtml_branch_coverage=1 00:15:57.155 --rc genhtml_function_coverage=1 00:15:57.155 --rc genhtml_legend=1 00:15:57.155 --rc geninfo_all_blocks=1 00:15:57.155 --rc geninfo_unexecuted_blocks=1 00:15:57.155 00:15:57.155 ' 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:57.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.155 --rc genhtml_branch_coverage=1 00:15:57.155 --rc genhtml_function_coverage=1 00:15:57.155 --rc genhtml_legend=1 00:15:57.155 --rc geninfo_all_blocks=1 00:15:57.155 --rc geninfo_unexecuted_blocks=1 00:15:57.155 00:15:57.155 ' 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:57.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.155 --rc genhtml_branch_coverage=1 00:15:57.155 --rc genhtml_function_coverage=1 00:15:57.155 --rc genhtml_legend=1 00:15:57.155 --rc geninfo_all_blocks=1 00:15:57.155 --rc geninfo_unexecuted_blocks=1 00:15:57.155 00:15:57.155 ' 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.155 03:00:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.155 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.414 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:57.414 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:15:57.414 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.414 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.414 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:57.414 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.414 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:57.414 03:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:15:57.414 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.414 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.414 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.414 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:57.415 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:57.415 
03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:57.415 Cannot find device "nvmf_init_br" 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:57.415 Cannot find device "nvmf_init_br2" 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:57.415 Cannot find device "nvmf_tgt_br" 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:57.415 Cannot find device "nvmf_tgt_br2" 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:57.415 Cannot find device "nvmf_init_br" 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:57.415 Cannot find device "nvmf_init_br2" 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:57.415 Cannot find device "nvmf_tgt_br" 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:57.415 Cannot find device "nvmf_tgt_br2" 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:57.415 Cannot find device "nvmf_br" 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:57.415 Cannot find device "nvmf_init_if" 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:57.415 Cannot find device "nvmf_init_if2" 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:57.415 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.415 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:15:57.416 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:57.416 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.416 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:15:57.416 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:57.416 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:57.416 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:57.416 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:57.416 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:57.416 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:57.416 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:57.416 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:57.416 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:57.416 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:57.416 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:57.416 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:57.416 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:57.416 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:57.416 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:57.674 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:57.674 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:57.674 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:57.674 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:57.674 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:57.675 03:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:57.675 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:57.675 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:15:57.675 00:15:57.675 --- 10.0.0.3 ping statistics --- 00:15:57.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.675 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:57.675 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:57.675 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:15:57.675 00:15:57.675 --- 10.0.0.4 ping statistics --- 00:15:57.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.675 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:57.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:57.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:57.675 00:15:57.675 --- 10.0.0.1 ping statistics --- 00:15:57.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.675 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:57.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:57.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:15:57.675 00:15:57.675 --- 10.0.0.2 ping statistics --- 00:15:57.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.675 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=73872 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 73872 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73872 ']' 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.675 03:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:57.934 [2024-12-05 03:00:28.547150] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
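The interleaved trace above amounts to a short bring-up recipe: the NVMe-oF target is started inside the nvmf_tgt_ns_spdk network namespace with --wait-for-rpc (so socket/TLS options can be applied before framework init), and reachability across the veth pairs is verified with ping. A minimal, non-authoritative sketch assuming the repository path and namespace name shown in the trace:

    # Start the target in the test namespace; --wait-for-rpc defers framework init
    # until the socket/TLS options have been applied over the RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &

    # Reachability checks across the veth pairs, matching the ping output above.
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1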
00:15:57.934 [2024-12-05 03:00:28.547312] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.934 [2024-12-05 03:00:28.738375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.193 [2024-12-05 03:00:28.867791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.193 [2024-12-05 03:00:28.867870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.193 [2024-12-05 03:00:28.867906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.193 [2024-12-05 03:00:28.867949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.193 [2024-12-05 03:00:28.867976] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.193 [2024-12-05 03:00:28.869579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.760 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.760 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:58.760 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:58.760 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:58.760 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:59.019 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.019 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:15:59.019 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:59.279 true 00:15:59.279 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:59.279 03:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:15:59.539 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:15:59.539 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:15:59.539 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:59.799 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:59.799 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:16:00.059 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:16:00.059 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:16:00.059 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:00.319 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:16:00.319 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:16:00.578 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:16:00.578 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:16:00.578 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:00.578 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:16:01.147 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:16:01.147 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:16:01.147 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:01.147 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:01.147 03:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:01.715 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:16:01.715 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:16:01.715 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:01.715 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:01.715 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:01.973 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:16:01.973 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:16:01.973 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:01.973 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:01.973 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:01.973 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:01.973 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:16:01.973 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:16:01.973 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:01.973 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:02.233 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:02.233 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:02.233 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:02.233 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:02.233 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:16:02.233 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:16:02.233 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:02.233 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:02.233 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:02.233 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.EkH5qNGVg0 00:16:02.233 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:16:02.233 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.zcjTUcDeB9 00:16:02.233 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:02.233 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:02.233 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.EkH5qNGVg0 00:16:02.233 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.zcjTUcDeB9 00:16:02.233 03:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:02.493 03:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:02.752 [2024-12-05 03:00:33.563259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:03.012 03:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.EkH5qNGVg0 00:16:03.012 03:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.EkH5qNGVg0 00:16:03.012 03:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:03.271 [2024-12-05 03:00:33.976748] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.271 03:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:03.530 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:03.788 [2024-12-05 03:00:34.453092] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:03.788 [2024-12-05 03:00:34.453417] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:03.788 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:04.048 malloc0 00:16:04.048 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:04.314 03:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.EkH5qNGVg0 00:16:04.627 03:00:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:04.885 03:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.EkH5qNGVg0 00:16:17.087 Initializing NVMe Controllers 00:16:17.087 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:17.087 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:17.087 Initialization complete. Launching workers. 00:16:17.087 ======================================================== 00:16:17.087 Latency(us) 00:16:17.087 Device Information : IOPS MiB/s Average min max 00:16:17.087 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7039.15 27.50 9094.89 2284.57 15152.98 00:16:17.087 ======================================================== 00:16:17.087 Total : 7039.15 27.50 9094.89 2284.57 15152.98 00:16:17.087 00:16:17.087 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EkH5qNGVg0 00:16:17.087 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:17.087 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:17.087 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:17.087 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EkH5qNGVg0 00:16:17.087 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:17.087 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74122 00:16:17.087 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:17.087 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:17.087 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74122 /var/tmp/bdevperf.sock 00:16:17.087 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74122 ']' 00:16:17.087 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:17.087 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:17.087 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:17.087 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.087 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:17.087 [2024-12-05 03:00:46.171183] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:16:17.087 [2024-12-05 03:00:46.172032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74122 ] 00:16:17.087 [2024-12-05 03:00:46.365749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.087 [2024-12-05 03:00:46.490648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.087 [2024-12-05 03:00:46.652580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:17.087 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.087 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:17.087 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EkH5qNGVg0 00:16:17.087 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:17.087 [2024-12-05 03:00:47.644979] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:17.087 TLSTESTn1 00:16:17.087 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:17.087 Running I/O for 10 seconds... 
00:16:19.402 3200.00 IOPS, 12.50 MiB/s [2024-12-05T03:00:51.184Z] 3102.50 IOPS, 12.12 MiB/s [2024-12-05T03:00:52.122Z] 3032.67 IOPS, 11.85 MiB/s [2024-12-05T03:00:53.069Z] 3008.00 IOPS, 11.75 MiB/s [2024-12-05T03:00:54.012Z] 2943.00 IOPS, 11.50 MiB/s [2024-12-05T03:00:54.950Z] 2977.17 IOPS, 11.63 MiB/s [2024-12-05T03:00:56.329Z] 2998.43 IOPS, 11.71 MiB/s [2024-12-05T03:00:57.267Z] 2982.88 IOPS, 11.65 MiB/s [2024-12-05T03:00:58.201Z] 2982.11 IOPS, 11.65 MiB/s [2024-12-05T03:00:58.201Z] 2971.30 IOPS, 11.61 MiB/s 00:16:27.357 Latency(us) 00:16:27.357 [2024-12-05T03:00:58.201Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.357 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:27.357 Verification LBA range: start 0x0 length 0x2000 00:16:27.357 TLSTESTn1 : 10.04 2973.42 11.61 0.00 0.00 42942.93 7864.32 46470.98 00:16:27.357 [2024-12-05T03:00:58.201Z] =================================================================================================================== 00:16:27.357 [2024-12-05T03:00:58.201Z] Total : 2973.42 11.61 0.00 0.00 42942.93 7864.32 46470.98 00:16:27.357 { 00:16:27.357 "results": [ 00:16:27.357 { 00:16:27.357 "job": "TLSTESTn1", 00:16:27.357 "core_mask": "0x4", 00:16:27.357 "workload": "verify", 00:16:27.357 "status": "finished", 00:16:27.357 "verify_range": { 00:16:27.357 "start": 0, 00:16:27.357 "length": 8192 00:16:27.357 }, 00:16:27.357 "queue_depth": 128, 00:16:27.357 "io_size": 4096, 00:16:27.357 "runtime": 10.035254, 00:16:27.357 "iops": 2973.4175138965093, 00:16:27.357 "mibps": 11.61491216365824, 00:16:27.357 "io_failed": 0, 00:16:27.357 "io_timeout": 0, 00:16:27.357 "avg_latency_us": 42942.92677466038, 00:16:27.357 "min_latency_us": 7864.32, 00:16:27.357 "max_latency_us": 46470.98181818182 00:16:27.357 } 00:16:27.357 ], 00:16:27.357 "core_count": 1 00:16:27.357 } 00:16:27.357 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:27.357 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 74122 00:16:27.357 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74122 ']' 00:16:27.357 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74122 00:16:27.357 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:27.357 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.357 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74122 00:16:27.357 killing process with pid 74122 00:16:27.357 Received shutdown signal, test time was about 10.000000 seconds 00:16:27.357 00:16:27.357 Latency(us) 00:16:27.357 [2024-12-05T03:00:58.201Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.357 [2024-12-05T03:00:58.201Z] =================================================================================================================== 00:16:27.357 [2024-12-05T03:00:58.201Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:27.357 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:27.357 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:27.357 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 74122' 00:16:27.357 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74122 00:16:27.357 03:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74122 00:16:28.291 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zcjTUcDeB9 00:16:28.291 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:28.291 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zcjTUcDeB9 00:16:28.291 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:28.291 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:28.291 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:28.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:28.291 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:28.291 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zcjTUcDeB9 00:16:28.291 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:28.291 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:28.292 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:28.292 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zcjTUcDeB9 00:16:28.292 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:28.292 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74271 00:16:28.292 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:28.292 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74271 /var/tmp/bdevperf.sock 00:16:28.292 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74271 ']' 00:16:28.292 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:28.292 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:28.292 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.292 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:28.292 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.292 03:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.550 [2024-12-05 03:00:59.158252] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:16:28.550 [2024-12-05 03:00:59.158414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74271 ] 00:16:28.550 [2024-12-05 03:00:59.342708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.808 [2024-12-05 03:00:59.447792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.808 [2024-12-05 03:00:59.624564] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:29.375 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.375 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:29.375 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zcjTUcDeB9 00:16:29.633 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:29.892 [2024-12-05 03:01:00.696668] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:29.892 [2024-12-05 03:01:00.707308] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:29.892 [2024-12-05 03:01:00.707884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:16:29.892 [2024-12-05 03:01:00.708845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:16:29.892 [2024-12-05 03:01:00.709844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:29.892 [2024-12-05 03:01:00.709896] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:29.892 [2024-12-05 03:01:00.709916] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:29.892 [2024-12-05 03:01:00.709936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:16:29.892 request: 00:16:29.892 { 00:16:29.892 "name": "TLSTEST", 00:16:29.892 "trtype": "tcp", 00:16:29.892 "traddr": "10.0.0.3", 00:16:29.892 "adrfam": "ipv4", 00:16:29.892 "trsvcid": "4420", 00:16:29.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.892 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:29.892 "prchk_reftag": false, 00:16:29.892 "prchk_guard": false, 00:16:29.892 "hdgst": false, 00:16:29.892 "ddgst": false, 00:16:29.892 "psk": "key0", 00:16:29.892 "allow_unrecognized_csi": false, 00:16:29.892 "method": "bdev_nvme_attach_controller", 00:16:29.892 "req_id": 1 00:16:29.892 } 00:16:29.892 Got JSON-RPC error response 00:16:29.892 response: 00:16:29.892 { 00:16:29.892 "code": -5, 00:16:29.892 "message": "Input/output error" 00:16:29.892 } 00:16:29.892 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74271 00:16:29.892 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74271 ']' 00:16:29.892 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74271 00:16:29.892 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:29.892 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:30.151 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74271 00:16:30.151 killing process with pid 74271 00:16:30.151 Received shutdown signal, test time was about 10.000000 seconds 00:16:30.151 00:16:30.151 Latency(us) 00:16:30.151 [2024-12-05T03:01:00.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.151 [2024-12-05T03:01:00.995Z] =================================================================================================================== 00:16:30.151 [2024-12-05T03:01:00.995Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:30.151 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:30.151 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:30.151 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74271' 00:16:30.151 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74271 00:16:30.151 03:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74271 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.EkH5qNGVg0 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.EkH5qNGVg0 
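The failed attach above is the expected outcome of this negative test: /tmp/tmp.zcjTUcDeB9 holds a PSK the target was never configured with, so the TLS handshake is torn down (spdk_sock_recv() errno 107) and the RPC surfaces JSON-RPC error -5, Input/output error. Roughly, the expect-failure pattern being exercised looks like the sketch below (the rpc.py invocation is the one from the trace; the surrounding check is illustrative, not the harness's actual NOT helper):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Attaching with a key the target does not recognize must fail.
if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0; then
    echo "unexpected success: attach should have failed" >&2
    exit 1
fi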
00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.EkH5qNGVg0 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EkH5qNGVg0 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74311 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74311 /var/tmp/bdevperf.sock 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74311 ']' 00:16:31.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:31.089 03:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.089 [2024-12-05 03:01:01.663279] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:16:31.089 [2024-12-05 03:01:01.663901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74311 ] 00:16:31.089 [2024-12-05 03:01:01.835892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.089 [2024-12-05 03:01:01.931259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.348 [2024-12-05 03:01:02.093380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:31.916 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.916 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:31.916 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EkH5qNGVg0 00:16:32.174 03:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:16:32.434 [2024-12-05 03:01:03.028281] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:32.434 [2024-12-05 03:01:03.037253] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:32.434 [2024-12-05 03:01:03.037317] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:32.434 [2024-12-05 03:01:03.037419] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:32.434 [2024-12-05 03:01:03.038288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:16:32.434 [2024-12-05 03:01:03.039268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:16:32.434 [2024-12-05 03:01:03.040256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:32.434 [2024-12-05 03:01:03.040314] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:32.434 [2024-12-05 03:01:03.040349] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:32.434 [2024-12-05 03:01:03.040369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:16:32.434 request: 00:16:32.434 { 00:16:32.434 "name": "TLSTEST", 00:16:32.434 "trtype": "tcp", 00:16:32.434 "traddr": "10.0.0.3", 00:16:32.434 "adrfam": "ipv4", 00:16:32.434 "trsvcid": "4420", 00:16:32.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.434 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:32.434 "prchk_reftag": false, 00:16:32.434 "prchk_guard": false, 00:16:32.434 "hdgst": false, 00:16:32.434 "ddgst": false, 00:16:32.434 "psk": "key0", 00:16:32.434 "allow_unrecognized_csi": false, 00:16:32.434 "method": "bdev_nvme_attach_controller", 00:16:32.434 "req_id": 1 00:16:32.434 } 00:16:32.434 Got JSON-RPC error response 00:16:32.434 response: 00:16:32.434 { 00:16:32.434 "code": -5, 00:16:32.434 "message": "Input/output error" 00:16:32.434 } 00:16:32.434 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74311 00:16:32.434 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74311 ']' 00:16:32.434 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74311 00:16:32.434 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:32.434 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.434 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74311 00:16:32.434 killing process with pid 74311 00:16:32.434 Received shutdown signal, test time was about 10.000000 seconds 00:16:32.434 00:16:32.434 Latency(us) 00:16:32.434 [2024-12-05T03:01:03.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.434 [2024-12-05T03:01:03.278Z] =================================================================================================================== 00:16:32.434 [2024-12-05T03:01:03.278Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:32.434 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:32.434 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:32.434 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74311' 00:16:32.434 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74311 00:16:32.434 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74311 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EkH5qNGVg0 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EkH5qNGVg0 
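Note the target-side errors in the attempt above: the PSK lookup key is the TLS identity string built from both NQNs, so a key registered only for host1 against cnode1 (via nvmf_subsystem_add_host --psk, as seen later in this log) is not found when host2 connects, even though the key file itself is valid. A trivial illustration of the identity the target searched for:

# The identity in the error above has the form "NVMe0R01 <hostnqn> <subnqn>".
hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
echo "NVMe0R01 ${hostnqn} ${subnqn}"   # prints the identity that had no PSK registered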
00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EkH5qNGVg0 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EkH5qNGVg0 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74352 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74352 /var/tmp/bdevperf.sock 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74352 ']' 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:33.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.373 03:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:33.373 [2024-12-05 03:01:04.080599] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:16:33.373 [2024-12-05 03:01:04.080765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74352 ] 00:16:33.632 [2024-12-05 03:01:04.251264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.632 [2024-12-05 03:01:04.344827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.890 [2024-12-05 03:01:04.503669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:34.457 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:34.457 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:34.457 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EkH5qNGVg0 00:16:34.716 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:34.975 [2024-12-05 03:01:05.631225] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:34.975 [2024-12-05 03:01:05.639788] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:34.975 [2024-12-05 03:01:05.639862] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:34.975 [2024-12-05 03:01:05.639940] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:34.975 [2024-12-05 03:01:05.640181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:16:34.975 [2024-12-05 03:01:05.641156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:16:34.975 [2024-12-05 03:01:05.642138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:16:34.976 [2024-12-05 03:01:05.642187] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:34.976 [2024-12-05 03:01:05.642223] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:16:34.976 [2024-12-05 03:01:05.642240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:16:34.976 request: 00:16:34.976 { 00:16:34.976 "name": "TLSTEST", 00:16:34.976 "trtype": "tcp", 00:16:34.976 "traddr": "10.0.0.3", 00:16:34.976 "adrfam": "ipv4", 00:16:34.976 "trsvcid": "4420", 00:16:34.976 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:34.976 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:34.976 "prchk_reftag": false, 00:16:34.976 "prchk_guard": false, 00:16:34.976 "hdgst": false, 00:16:34.976 "ddgst": false, 00:16:34.976 "psk": "key0", 00:16:34.976 "allow_unrecognized_csi": false, 00:16:34.976 "method": "bdev_nvme_attach_controller", 00:16:34.976 "req_id": 1 00:16:34.976 } 00:16:34.976 Got JSON-RPC error response 00:16:34.976 response: 00:16:34.976 { 00:16:34.976 "code": -5, 00:16:34.976 "message": "Input/output error" 00:16:34.976 } 00:16:34.976 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74352 00:16:34.976 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74352 ']' 00:16:34.976 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74352 00:16:34.976 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:34.976 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:34.976 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74352 00:16:34.976 killing process with pid 74352 00:16:34.976 Received shutdown signal, test time was about 10.000000 seconds 00:16:34.976 00:16:34.976 Latency(us) 00:16:34.976 [2024-12-05T03:01:05.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.976 [2024-12-05T03:01:05.820Z] =================================================================================================================== 00:16:34.976 [2024-12-05T03:01:05.820Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:34.976 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:34.976 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:34.976 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74352' 00:16:34.976 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74352 00:16:34.976 03:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74352 00:16:35.913 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:35.913 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:35.913 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:35.913 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:35.913 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:35.913 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:35.913 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:35.913 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:35.913 03:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:35.913 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.913 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:35.913 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.913 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:35.913 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:35.913 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:35.913 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:35.913 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:35.913 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:35.913 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74387 00:16:35.913 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:35.914 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:35.914 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74387 /var/tmp/bdevperf.sock 00:16:35.914 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74387 ']' 00:16:35.914 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:35.914 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.914 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:35.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:35.914 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.914 03:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:35.914 [2024-12-05 03:01:06.618665] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:16:35.914 [2024-12-05 03:01:06.618829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74387 ] 00:16:36.173 [2024-12-05 03:01:06.781285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.173 [2024-12-05 03:01:06.875782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.432 [2024-12-05 03:01:07.035254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:36.691 03:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.691 03:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:36.691 03:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:16:36.950 [2024-12-05 03:01:07.752227] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:16:36.950 [2024-12-05 03:01:07.752301] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:36.950 request: 00:16:36.950 { 00:16:36.950 "name": "key0", 00:16:36.950 "path": "", 00:16:36.950 "method": "keyring_file_add_key", 00:16:36.950 "req_id": 1 00:16:36.950 } 00:16:36.950 Got JSON-RPC error response 00:16:36.950 response: 00:16:36.950 { 00:16:36.950 "code": -1, 00:16:36.950 "message": "Operation not permitted" 00:16:36.950 } 00:16:36.950 03:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:37.209 [2024-12-05 03:01:07.988417] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:37.209 [2024-12-05 03:01:07.988526] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:37.209 request: 00:16:37.209 { 00:16:37.209 "name": "TLSTEST", 00:16:37.209 "trtype": "tcp", 00:16:37.209 "traddr": "10.0.0.3", 00:16:37.209 "adrfam": "ipv4", 00:16:37.209 "trsvcid": "4420", 00:16:37.209 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:37.209 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:37.209 "prchk_reftag": false, 00:16:37.209 "prchk_guard": false, 00:16:37.209 "hdgst": false, 00:16:37.209 "ddgst": false, 00:16:37.209 "psk": "key0", 00:16:37.209 "allow_unrecognized_csi": false, 00:16:37.209 "method": "bdev_nvme_attach_controller", 00:16:37.209 "req_id": 1 00:16:37.209 } 00:16:37.209 Got JSON-RPC error response 00:16:37.209 response: 00:16:37.209 { 00:16:37.209 "code": -126, 00:16:37.209 "message": "Required key not available" 00:16:37.209 } 00:16:37.209 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74387 00:16:37.209 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74387 ']' 00:16:37.209 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74387 00:16:37.209 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:37.209 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.209 03:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74387 00:16:37.209 killing process with pid 74387 00:16:37.209 Received shutdown signal, test time was about 10.000000 seconds 00:16:37.209 00:16:37.209 Latency(us) 00:16:37.209 [2024-12-05T03:01:08.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.209 [2024-12-05T03:01:08.053Z] =================================================================================================================== 00:16:37.209 [2024-12-05T03:01:08.053Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:37.209 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:37.209 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:37.209 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74387' 00:16:37.209 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74387 00:16:37.209 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74387 00:16:38.146 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:38.146 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:38.146 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:38.146 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:38.146 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:38.146 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 73872 00:16:38.146 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73872 ']' 00:16:38.146 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73872 00:16:38.146 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:38.146 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.146 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73872 00:16:38.146 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:38.146 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:38.146 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73872' 00:16:38.146 killing process with pid 73872 00:16:38.146 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73872 00:16:38.146 03:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73872 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.L9GucbBzHC 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.L9GucbBzHC 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:39.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=74451 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 74451 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74451 ']' 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.524 03:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:39.524 [2024-12-05 03:01:10.206345] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:16:39.524 [2024-12-05 03:01:10.206537] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.782 [2024-12-05 03:01:10.391630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.782 [2024-12-05 03:01:10.515578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.782 [2024-12-05 03:01:10.515666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
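The format_interchange_psk step above wraps the raw 48-byte key in the NVMe TLS PSK interchange format, NVMeTLSkey-1:02:<base64 payload>:, with 02 here denoting the SHA-384 variant. A rough sketch of that encoding, assuming the base64 payload is the key bytes followed by their CRC-32 appended little-endian (which is what the helper's embedded python one-liner appears to compute):

key=00112233445566778899aabbccddeeff0011223344556677
python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()                     # 48 bytes of key material
crc = zlib.crc32(key).to_bytes(4, "little")    # assumed: CRC-32 appended little-endian
print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
' "$key"

The resulting key is then written to a temp file and chmod'ed 0600; the keyring refuses world-readable key files, which is exactly what the chmod 0666 negative test near the end of this log exercises.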
00:16:39.782 [2024-12-05 03:01:10.515691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.782 [2024-12-05 03:01:10.515722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.782 [2024-12-05 03:01:10.515740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.782 [2024-12-05 03:01:10.517178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.041 [2024-12-05 03:01:10.692952] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:40.609 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.609 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:40.609 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:40.609 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:40.609 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:40.609 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.609 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.L9GucbBzHC 00:16:40.609 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.L9GucbBzHC 00:16:40.609 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:40.868 [2024-12-05 03:01:11.503046] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.868 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:41.127 03:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:41.386 [2024-12-05 03:01:12.051238] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:41.386 [2024-12-05 03:01:12.051559] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:41.386 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:41.645 malloc0 00:16:41.645 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:41.903 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.L9GucbBzHC 00:16:42.162 03:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:42.729 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L9GucbBzHC 00:16:42.729 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
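Condensing the setup_nvmf_tgt sequence traced above: the target side of a TLS run needs the TCP transport, a subsystem with a TLS-enabled listener (-k), a backing namespace, and the PSK registered for the one host allowed to use it (all commands as traced; only the rpc.py path is abbreviated into a variable):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k enables TLS
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.L9GucbBzHC
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0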
00:16:42.729 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:42.729 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:42.729 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.L9GucbBzHC 00:16:42.729 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:42.729 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74511 00:16:42.729 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:42.729 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:42.729 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74511 /var/tmp/bdevperf.sock 00:16:42.729 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74511 ']' 00:16:42.729 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:42.729 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:42.729 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:42.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:42.729 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:42.729 03:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:42.729 [2024-12-05 03:01:13.403878] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:16:42.729 [2024-12-05 03:01:13.405039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74511 ] 00:16:42.987 [2024-12-05 03:01:13.589862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.987 [2024-12-05 03:01:13.695908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.245 [2024-12-05 03:01:13.877582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:43.811 03:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.811 03:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:43.811 03:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.L9GucbBzHC 00:16:44.070 03:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:44.326 [2024-12-05 03:01:15.040321] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:44.326 TLSTESTn1 00:16:44.326 03:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:44.582 Running I/O for 10 seconds... 00:16:46.448 2873.00 IOPS, 11.22 MiB/s [2024-12-05T03:01:18.670Z] 2967.00 IOPS, 11.59 MiB/s [2024-12-05T03:01:19.605Z] 2970.33 IOPS, 11.60 MiB/s [2024-12-05T03:01:20.541Z] 2971.25 IOPS, 11.61 MiB/s [2024-12-05T03:01:21.478Z] 2955.60 IOPS, 11.55 MiB/s [2024-12-05T03:01:22.414Z] 2981.33 IOPS, 11.65 MiB/s [2024-12-05T03:01:23.351Z] 3004.29 IOPS, 11.74 MiB/s [2024-12-05T03:01:24.314Z] 3028.00 IOPS, 11.83 MiB/s [2024-12-05T03:01:25.692Z] 3034.22 IOPS, 11.85 MiB/s [2024-12-05T03:01:25.692Z] 3043.00 IOPS, 11.89 MiB/s 00:16:54.848 Latency(us) 00:16:54.848 [2024-12-05T03:01:25.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.848 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:54.848 Verification LBA range: start 0x0 length 0x2000 00:16:54.848 TLSTESTn1 : 10.02 3048.96 11.91 0.00 0.00 41899.77 8102.63 41943.04 00:16:54.848 [2024-12-05T03:01:25.692Z] =================================================================================================================== 00:16:54.848 [2024-12-05T03:01:25.692Z] Total : 3048.96 11.91 0.00 0.00 41899.77 8102.63 41943.04 00:16:54.848 { 00:16:54.848 "results": [ 00:16:54.848 { 00:16:54.848 "job": "TLSTESTn1", 00:16:54.848 "core_mask": "0x4", 00:16:54.848 "workload": "verify", 00:16:54.848 "status": "finished", 00:16:54.848 "verify_range": { 00:16:54.848 "start": 0, 00:16:54.848 "length": 8192 00:16:54.848 }, 00:16:54.848 "queue_depth": 128, 00:16:54.848 "io_size": 4096, 00:16:54.848 "runtime": 10.022426, 00:16:54.848 "iops": 3048.9623969286476, 00:16:54.848 "mibps": 11.91000936300253, 00:16:54.848 "io_failed": 0, 00:16:54.848 "io_timeout": 0, 00:16:54.849 "avg_latency_us": 41899.76695381064, 00:16:54.849 "min_latency_us": 8102.632727272728, 00:16:54.849 
"max_latency_us": 41943.04 00:16:54.849 } 00:16:54.849 ], 00:16:54.849 "core_count": 1 00:16:54.849 } 00:16:54.849 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:54.849 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 74511 00:16:54.849 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74511 ']' 00:16:54.849 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74511 00:16:54.849 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:54.849 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.849 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74511 00:16:54.849 killing process with pid 74511 00:16:54.849 Received shutdown signal, test time was about 10.000000 seconds 00:16:54.849 00:16:54.849 Latency(us) 00:16:54.849 [2024-12-05T03:01:25.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.849 [2024-12-05T03:01:25.693Z] =================================================================================================================== 00:16:54.849 [2024-12-05T03:01:25.693Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:54.849 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:54.849 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:54.849 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74511' 00:16:54.849 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74511 00:16:54.849 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74511 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.L9GucbBzHC 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L9GucbBzHC 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L9GucbBzHC 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L9GucbBzHC 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.L9GucbBzHC 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74660 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74660 /var/tmp/bdevperf.sock 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74660 ']' 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:55.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.414 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:55.671 [2024-12-05 03:01:26.359793] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:16:55.671 [2024-12-05 03:01:26.360251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74660 ] 00:16:55.930 [2024-12-05 03:01:26.540921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.930 [2024-12-05 03:01:26.639774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.189 [2024-12-05 03:01:26.813222] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:56.757 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.757 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:56.757 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.L9GucbBzHC 00:16:56.757 [2024-12-05 03:01:27.524035] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.L9GucbBzHC': 0100666 00:16:56.757 [2024-12-05 03:01:27.524309] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:56.757 request: 00:16:56.757 { 00:16:56.757 "name": "key0", 00:16:56.757 "path": "/tmp/tmp.L9GucbBzHC", 00:16:56.757 "method": "keyring_file_add_key", 00:16:56.757 "req_id": 1 00:16:56.757 } 00:16:56.757 Got JSON-RPC error response 00:16:56.757 response: 00:16:56.757 { 00:16:56.757 "code": -1, 00:16:56.757 "message": "Operation not permitted" 00:16:56.757 } 00:16:56.757 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:57.015 [2024-12-05 03:01:27.804278] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:57.015 [2024-12-05 03:01:27.804384] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:57.015 request: 00:16:57.015 { 00:16:57.015 "name": "TLSTEST", 00:16:57.015 "trtype": "tcp", 00:16:57.015 "traddr": "10.0.0.3", 00:16:57.015 "adrfam": "ipv4", 00:16:57.015 "trsvcid": "4420", 00:16:57.016 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.016 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:57.016 "prchk_reftag": false, 00:16:57.016 "prchk_guard": false, 00:16:57.016 "hdgst": false, 00:16:57.016 "ddgst": false, 00:16:57.016 "psk": "key0", 00:16:57.016 "allow_unrecognized_csi": false, 00:16:57.016 "method": "bdev_nvme_attach_controller", 00:16:57.016 "req_id": 1 00:16:57.016 } 00:16:57.016 Got JSON-RPC error response 00:16:57.016 response: 00:16:57.016 { 00:16:57.016 "code": -126, 00:16:57.016 "message": "Required key not available" 00:16:57.016 } 00:16:57.016 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74660 00:16:57.016 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74660 ']' 00:16:57.016 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74660 00:16:57.016 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:57.016 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.016 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74660 00:16:57.275 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:57.275 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:57.275 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74660' 00:16:57.275 killing process with pid 74660 00:16:57.275 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74660 00:16:57.275 Received shutdown signal, test time was about 10.000000 seconds 00:16:57.275 00:16:57.275 Latency(us) 00:16:57.275 [2024-12-05T03:01:28.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.275 [2024-12-05T03:01:28.119Z] =================================================================================================================== 00:16:57.275 [2024-12-05T03:01:28.119Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:57.275 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74660 00:16:58.212 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:58.212 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:58.212 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:58.212 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:58.212 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:58.212 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 74451 00:16:58.212 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74451 ']' 00:16:58.212 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74451 00:16:58.212 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:58.212 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.212 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74451 00:16:58.212 killing process with pid 74451 00:16:58.212 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:58.212 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:58.212 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74451' 00:16:58.212 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74451 00:16:58.212 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74451 00:16:59.148 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:16:59.148 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:59.148 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:59.149 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:16:59.149 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=74712 00:16:59.149 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:59.149 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 74712 00:16:59.149 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74712 ']' 00:16:59.149 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.149 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.149 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.149 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.149 03:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:59.407 [2024-12-05 03:01:30.081958] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:16:59.407 [2024-12-05 03:01:30.082099] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.667 [2024-12-05 03:01:30.257864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.667 [2024-12-05 03:01:30.362386] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.667 [2024-12-05 03:01:30.362444] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.667 [2024-12-05 03:01:30.362465] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.667 [2024-12-05 03:01:30.362489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.667 [2024-12-05 03:01:30.362504] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
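The startup notices above describe how to inspect the tracepoints enabled by -e 0xFFFF on this target; as a small illustration of that workflow (commands taken from the notices themselves, assuming spdk_trace from the SPDK build is on PATH):

  # snapshot events from the running nvmf_tgt instance 0
  spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0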
00:16:59.667 [2024-12-05 03:01:30.363720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.927 [2024-12-05 03:01:30.575673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:00.494 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.494 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:00.494 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:00.494 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:00.494 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:00.494 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.494 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.L9GucbBzHC 00:17:00.494 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:00.494 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.L9GucbBzHC 00:17:00.494 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:17:00.494 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:00.494 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:17:00.494 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:00.494 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.L9GucbBzHC 00:17:00.494 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.L9GucbBzHC 00:17:00.494 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:00.752 [2024-12-05 03:01:31.348516] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.752 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:01.011 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:01.270 [2024-12-05 03:01:31.888677] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:01.270 [2024-12-05 03:01:31.889030] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:01.270 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:01.529 malloc0 00:17:01.529 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:01.787 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.L9GucbBzHC 00:17:02.046 
[2024-12-05 03:01:32.781342] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.L9GucbBzHC': 0100666 00:17:02.046 [2024-12-05 03:01:32.781418] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:02.046 request: 00:17:02.046 { 00:17:02.046 "name": "key0", 00:17:02.046 "path": "/tmp/tmp.L9GucbBzHC", 00:17:02.046 "method": "keyring_file_add_key", 00:17:02.046 "req_id": 1 00:17:02.046 } 00:17:02.046 Got JSON-RPC error response 00:17:02.046 response: 00:17:02.046 { 00:17:02.046 "code": -1, 00:17:02.046 "message": "Operation not permitted" 00:17:02.046 } 00:17:02.046 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:02.304 [2024-12-05 03:01:33.053469] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:17:02.304 [2024-12-05 03:01:33.053567] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:02.304 request: 00:17:02.304 { 00:17:02.304 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.304 "host": "nqn.2016-06.io.spdk:host1", 00:17:02.304 "psk": "key0", 00:17:02.304 "method": "nvmf_subsystem_add_host", 00:17:02.304 "req_id": 1 00:17:02.304 } 00:17:02.304 Got JSON-RPC error response 00:17:02.304 response: 00:17:02.304 { 00:17:02.304 "code": -32603, 00:17:02.304 "message": "Internal error" 00:17:02.304 } 00:17:02.304 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:02.304 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.304 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.304 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.304 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 74712 00:17:02.304 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74712 ']' 00:17:02.304 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74712 00:17:02.304 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:02.304 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.304 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74712 00:17:02.304 killing process with pid 74712 00:17:02.304 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:02.304 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:02.304 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74712' 00:17:02.304 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74712 00:17:02.304 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74712 00:17:03.236 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.L9GucbBzHC 00:17:03.236 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:17:03.236 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:03.236 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:03.236 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:03.236 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=74791 00:17:03.236 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:03.236 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 74791 00:17:03.236 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74791 ']' 00:17:03.236 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.236 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.236 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.236 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.236 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:03.494 [2024-12-05 03:01:34.199248] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:17:03.494 [2024-12-05 03:01:34.199425] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.752 [2024-12-05 03:01:34.381727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.752 [2024-12-05 03:01:34.480501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.752 [2024-12-05 03:01:34.480787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.752 [2024-12-05 03:01:34.480823] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.752 [2024-12-05 03:01:34.480848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.752 [2024-12-05 03:01:34.480863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
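Both keyring_file_add_key attempts so far failed for the same reason: keyring.c rejects a PSK file whose mode allows group or other access (logged as 0100666), so the RPC returns "Operation not permitted" and the negative tests pass. The script then tightens the key with chmod 0600 at target/tls.sh@182 before restarting the target, after which the same RPC succeeds. A minimal sketch of that requirement, assuming the key path from this run:

  # the PSK file must be owner-only before the keyring will accept it
  chmod 0600 /tmp/tmp.L9GucbBzHC
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.L9GucbBzHC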
00:17:03.752 [2024-12-05 03:01:34.482114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.010 [2024-12-05 03:01:34.654180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:04.269 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.269 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:04.269 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:04.269 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:04.269 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:04.528 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.528 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.L9GucbBzHC 00:17:04.528 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.L9GucbBzHC 00:17:04.528 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:04.785 [2024-12-05 03:01:35.424883] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.785 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:05.043 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:05.305 [2024-12-05 03:01:35.909006] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:05.305 [2024-12-05 03:01:35.909311] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:05.305 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:05.573 malloc0 00:17:05.573 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:05.831 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.L9GucbBzHC 00:17:06.090 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:06.348 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:06.348 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=74851 00:17:06.348 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:06.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
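With the key file mode fixed, setup_nvmf_tgt (target/tls.sh@50-59) goes through cleanly in the run above. Condensed into one place, and keeping the paths, NQNs and the 10.0.0.3:4420 listener from this log, the target-side sequence is roughly the following sketch:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py     # talks to /var/tmp/spdk.sock by default
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k requests a TLS-secured listener (secure_channel: true in the saved config below)
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.L9GucbBzHC
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0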
00:17:06.348 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 74851 /var/tmp/bdevperf.sock 00:17:06.348 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74851 ']' 00:17:06.348 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:06.348 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.348 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:06.348 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.348 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:06.348 [2024-12-05 03:01:37.044309] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:17:06.348 [2024-12-05 03:01:37.044476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74851 ] 00:17:06.608 [2024-12-05 03:01:37.208207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.608 [2024-12-05 03:01:37.295567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.868 [2024-12-05 03:01:37.451601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:07.436 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.436 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:07.436 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.L9GucbBzHC 00:17:07.696 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:07.955 [2024-12-05 03:01:38.589177] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:07.955 TLSTESTn1 00:17:07.955 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:08.215 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:17:08.215 "subsystems": [ 00:17:08.215 { 00:17:08.215 "subsystem": "keyring", 00:17:08.215 "config": [ 00:17:08.215 { 00:17:08.215 "method": "keyring_file_add_key", 00:17:08.215 "params": { 00:17:08.215 "name": "key0", 00:17:08.215 "path": "/tmp/tmp.L9GucbBzHC" 00:17:08.215 } 00:17:08.215 } 00:17:08.215 ] 00:17:08.215 }, 00:17:08.215 { 00:17:08.215 "subsystem": "iobuf", 00:17:08.215 "config": [ 00:17:08.215 { 00:17:08.215 "method": "iobuf_set_options", 00:17:08.215 "params": { 00:17:08.215 "small_pool_count": 8192, 00:17:08.215 "large_pool_count": 1024, 00:17:08.215 "small_bufsize": 8192, 00:17:08.215 "large_bufsize": 135168, 00:17:08.215 "enable_numa": false 00:17:08.215 } 00:17:08.215 } 00:17:08.215 ] 00:17:08.215 }, 00:17:08.215 { 00:17:08.215 
"subsystem": "sock", 00:17:08.215 "config": [ 00:17:08.215 { 00:17:08.215 "method": "sock_set_default_impl", 00:17:08.215 "params": { 00:17:08.215 "impl_name": "uring" 00:17:08.215 } 00:17:08.215 }, 00:17:08.215 { 00:17:08.215 "method": "sock_impl_set_options", 00:17:08.215 "params": { 00:17:08.215 "impl_name": "ssl", 00:17:08.215 "recv_buf_size": 4096, 00:17:08.215 "send_buf_size": 4096, 00:17:08.215 "enable_recv_pipe": true, 00:17:08.215 "enable_quickack": false, 00:17:08.215 "enable_placement_id": 0, 00:17:08.215 "enable_zerocopy_send_server": true, 00:17:08.215 "enable_zerocopy_send_client": false, 00:17:08.215 "zerocopy_threshold": 0, 00:17:08.215 "tls_version": 0, 00:17:08.215 "enable_ktls": false 00:17:08.215 } 00:17:08.215 }, 00:17:08.215 { 00:17:08.215 "method": "sock_impl_set_options", 00:17:08.215 "params": { 00:17:08.215 "impl_name": "posix", 00:17:08.215 "recv_buf_size": 2097152, 00:17:08.215 "send_buf_size": 2097152, 00:17:08.215 "enable_recv_pipe": true, 00:17:08.215 "enable_quickack": false, 00:17:08.215 "enable_placement_id": 0, 00:17:08.215 "enable_zerocopy_send_server": true, 00:17:08.215 "enable_zerocopy_send_client": false, 00:17:08.215 "zerocopy_threshold": 0, 00:17:08.215 "tls_version": 0, 00:17:08.215 "enable_ktls": false 00:17:08.215 } 00:17:08.215 }, 00:17:08.215 { 00:17:08.215 "method": "sock_impl_set_options", 00:17:08.215 "params": { 00:17:08.215 "impl_name": "uring", 00:17:08.215 "recv_buf_size": 2097152, 00:17:08.215 "send_buf_size": 2097152, 00:17:08.215 "enable_recv_pipe": true, 00:17:08.215 "enable_quickack": false, 00:17:08.215 "enable_placement_id": 0, 00:17:08.215 "enable_zerocopy_send_server": false, 00:17:08.215 "enable_zerocopy_send_client": false, 00:17:08.215 "zerocopy_threshold": 0, 00:17:08.215 "tls_version": 0, 00:17:08.215 "enable_ktls": false 00:17:08.215 } 00:17:08.215 } 00:17:08.215 ] 00:17:08.215 }, 00:17:08.215 { 00:17:08.215 "subsystem": "vmd", 00:17:08.215 "config": [] 00:17:08.215 }, 00:17:08.215 { 00:17:08.215 "subsystem": "accel", 00:17:08.215 "config": [ 00:17:08.215 { 00:17:08.215 "method": "accel_set_options", 00:17:08.215 "params": { 00:17:08.215 "small_cache_size": 128, 00:17:08.215 "large_cache_size": 16, 00:17:08.215 "task_count": 2048, 00:17:08.215 "sequence_count": 2048, 00:17:08.215 "buf_count": 2048 00:17:08.215 } 00:17:08.215 } 00:17:08.215 ] 00:17:08.215 }, 00:17:08.215 { 00:17:08.215 "subsystem": "bdev", 00:17:08.215 "config": [ 00:17:08.215 { 00:17:08.215 "method": "bdev_set_options", 00:17:08.215 "params": { 00:17:08.215 "bdev_io_pool_size": 65535, 00:17:08.215 "bdev_io_cache_size": 256, 00:17:08.215 "bdev_auto_examine": true, 00:17:08.215 "iobuf_small_cache_size": 128, 00:17:08.215 "iobuf_large_cache_size": 16 00:17:08.215 } 00:17:08.215 }, 00:17:08.215 { 00:17:08.215 "method": "bdev_raid_set_options", 00:17:08.215 "params": { 00:17:08.215 "process_window_size_kb": 1024, 00:17:08.215 "process_max_bandwidth_mb_sec": 0 00:17:08.215 } 00:17:08.216 }, 00:17:08.216 { 00:17:08.216 "method": "bdev_iscsi_set_options", 00:17:08.216 "params": { 00:17:08.216 "timeout_sec": 30 00:17:08.216 } 00:17:08.216 }, 00:17:08.216 { 00:17:08.216 "method": "bdev_nvme_set_options", 00:17:08.216 "params": { 00:17:08.216 "action_on_timeout": "none", 00:17:08.216 "timeout_us": 0, 00:17:08.216 "timeout_admin_us": 0, 00:17:08.216 "keep_alive_timeout_ms": 10000, 00:17:08.216 "arbitration_burst": 0, 00:17:08.216 "low_priority_weight": 0, 00:17:08.216 "medium_priority_weight": 0, 00:17:08.216 "high_priority_weight": 0, 00:17:08.216 
"nvme_adminq_poll_period_us": 10000, 00:17:08.216 "nvme_ioq_poll_period_us": 0, 00:17:08.216 "io_queue_requests": 0, 00:17:08.216 "delay_cmd_submit": true, 00:17:08.216 "transport_retry_count": 4, 00:17:08.216 "bdev_retry_count": 3, 00:17:08.216 "transport_ack_timeout": 0, 00:17:08.216 "ctrlr_loss_timeout_sec": 0, 00:17:08.216 "reconnect_delay_sec": 0, 00:17:08.216 "fast_io_fail_timeout_sec": 0, 00:17:08.216 "disable_auto_failback": false, 00:17:08.216 "generate_uuids": false, 00:17:08.216 "transport_tos": 0, 00:17:08.216 "nvme_error_stat": false, 00:17:08.216 "rdma_srq_size": 0, 00:17:08.216 "io_path_stat": false, 00:17:08.216 "allow_accel_sequence": false, 00:17:08.216 "rdma_max_cq_size": 0, 00:17:08.216 "rdma_cm_event_timeout_ms": 0, 00:17:08.216 "dhchap_digests": [ 00:17:08.216 "sha256", 00:17:08.216 "sha384", 00:17:08.216 "sha512" 00:17:08.216 ], 00:17:08.216 "dhchap_dhgroups": [ 00:17:08.216 "null", 00:17:08.216 "ffdhe2048", 00:17:08.216 "ffdhe3072", 00:17:08.216 "ffdhe4096", 00:17:08.216 "ffdhe6144", 00:17:08.216 "ffdhe8192" 00:17:08.216 ] 00:17:08.216 } 00:17:08.216 }, 00:17:08.216 { 00:17:08.216 "method": "bdev_nvme_set_hotplug", 00:17:08.216 "params": { 00:17:08.216 "period_us": 100000, 00:17:08.216 "enable": false 00:17:08.216 } 00:17:08.216 }, 00:17:08.216 { 00:17:08.216 "method": "bdev_malloc_create", 00:17:08.216 "params": { 00:17:08.216 "name": "malloc0", 00:17:08.216 "num_blocks": 8192, 00:17:08.216 "block_size": 4096, 00:17:08.216 "physical_block_size": 4096, 00:17:08.216 "uuid": "b2356fb6-9c3a-4d1c-af72-8b5cd8e947d0", 00:17:08.216 "optimal_io_boundary": 0, 00:17:08.216 "md_size": 0, 00:17:08.216 "dif_type": 0, 00:17:08.216 "dif_is_head_of_md": false, 00:17:08.216 "dif_pi_format": 0 00:17:08.216 } 00:17:08.216 }, 00:17:08.216 { 00:17:08.216 "method": "bdev_wait_for_examine" 00:17:08.216 } 00:17:08.216 ] 00:17:08.216 }, 00:17:08.216 { 00:17:08.216 "subsystem": "nbd", 00:17:08.216 "config": [] 00:17:08.216 }, 00:17:08.216 { 00:17:08.216 "subsystem": "scheduler", 00:17:08.216 "config": [ 00:17:08.216 { 00:17:08.216 "method": "framework_set_scheduler", 00:17:08.216 "params": { 00:17:08.216 "name": "static" 00:17:08.216 } 00:17:08.216 } 00:17:08.216 ] 00:17:08.216 }, 00:17:08.216 { 00:17:08.216 "subsystem": "nvmf", 00:17:08.216 "config": [ 00:17:08.216 { 00:17:08.216 "method": "nvmf_set_config", 00:17:08.216 "params": { 00:17:08.216 "discovery_filter": "match_any", 00:17:08.216 "admin_cmd_passthru": { 00:17:08.216 "identify_ctrlr": false 00:17:08.216 }, 00:17:08.216 "dhchap_digests": [ 00:17:08.216 "sha256", 00:17:08.216 "sha384", 00:17:08.216 "sha512" 00:17:08.216 ], 00:17:08.216 "dhchap_dhgroups": [ 00:17:08.216 "null", 00:17:08.216 "ffdhe2048", 00:17:08.216 "ffdhe3072", 00:17:08.216 "ffdhe4096", 00:17:08.216 "ffdhe6144", 00:17:08.216 "ffdhe8192" 00:17:08.216 ] 00:17:08.216 } 00:17:08.216 }, 00:17:08.216 { 00:17:08.216 "method": "nvmf_set_max_subsystems", 00:17:08.216 "params": { 00:17:08.216 "max_subsystems": 1024 00:17:08.216 } 00:17:08.216 }, 00:17:08.216 { 00:17:08.216 "method": "nvmf_set_crdt", 00:17:08.216 "params": { 00:17:08.216 "crdt1": 0, 00:17:08.216 "crdt2": 0, 00:17:08.216 "crdt3": 0 00:17:08.216 } 00:17:08.216 }, 00:17:08.216 { 00:17:08.216 "method": "nvmf_create_transport", 00:17:08.216 "params": { 00:17:08.216 "trtype": "TCP", 00:17:08.216 "max_queue_depth": 128, 00:17:08.216 "max_io_qpairs_per_ctrlr": 127, 00:17:08.216 "in_capsule_data_size": 4096, 00:17:08.216 "max_io_size": 131072, 00:17:08.216 "io_unit_size": 131072, 00:17:08.216 "max_aq_depth": 128, 
00:17:08.216 "num_shared_buffers": 511, 00:17:08.216 "buf_cache_size": 4294967295, 00:17:08.216 "dif_insert_or_strip": false, 00:17:08.216 "zcopy": false, 00:17:08.216 "c2h_success": false, 00:17:08.216 "sock_priority": 0, 00:17:08.216 "abort_timeout_sec": 1, 00:17:08.216 "ack_timeout": 0, 00:17:08.216 "data_wr_pool_size": 0 00:17:08.216 } 00:17:08.216 }, 00:17:08.216 { 00:17:08.216 "method": "nvmf_create_subsystem", 00:17:08.216 "params": { 00:17:08.216 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:08.216 "allow_any_host": false, 00:17:08.216 "serial_number": "SPDK00000000000001", 00:17:08.216 "model_number": "SPDK bdev Controller", 00:17:08.216 "max_namespaces": 10, 00:17:08.216 "min_cntlid": 1, 00:17:08.216 "max_cntlid": 65519, 00:17:08.217 "ana_reporting": false 00:17:08.217 } 00:17:08.217 }, 00:17:08.217 { 00:17:08.217 "method": "nvmf_subsystem_add_host", 00:17:08.217 "params": { 00:17:08.217 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:08.217 "host": "nqn.2016-06.io.spdk:host1", 00:17:08.217 "psk": "key0" 00:17:08.217 } 00:17:08.217 }, 00:17:08.217 { 00:17:08.217 "method": "nvmf_subsystem_add_ns", 00:17:08.217 "params": { 00:17:08.217 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:08.217 "namespace": { 00:17:08.217 "nsid": 1, 00:17:08.217 "bdev_name": "malloc0", 00:17:08.217 "nguid": "B2356FB69C3A4D1CAF728B5CD8E947D0", 00:17:08.217 "uuid": "b2356fb6-9c3a-4d1c-af72-8b5cd8e947d0", 00:17:08.217 "no_auto_visible": false 00:17:08.217 } 00:17:08.217 } 00:17:08.217 }, 00:17:08.217 { 00:17:08.217 "method": "nvmf_subsystem_add_listener", 00:17:08.217 "params": { 00:17:08.217 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:08.217 "listen_address": { 00:17:08.217 "trtype": "TCP", 00:17:08.217 "adrfam": "IPv4", 00:17:08.217 "traddr": "10.0.0.3", 00:17:08.217 "trsvcid": "4420" 00:17:08.217 }, 00:17:08.217 "secure_channel": true 00:17:08.217 } 00:17:08.217 } 00:17:08.217 ] 00:17:08.217 } 00:17:08.217 ] 00:17:08.217 }' 00:17:08.217 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:08.786 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:17:08.786 "subsystems": [ 00:17:08.786 { 00:17:08.786 "subsystem": "keyring", 00:17:08.786 "config": [ 00:17:08.786 { 00:17:08.786 "method": "keyring_file_add_key", 00:17:08.786 "params": { 00:17:08.786 "name": "key0", 00:17:08.786 "path": "/tmp/tmp.L9GucbBzHC" 00:17:08.786 } 00:17:08.786 } 00:17:08.786 ] 00:17:08.786 }, 00:17:08.786 { 00:17:08.786 "subsystem": "iobuf", 00:17:08.786 "config": [ 00:17:08.786 { 00:17:08.786 "method": "iobuf_set_options", 00:17:08.786 "params": { 00:17:08.786 "small_pool_count": 8192, 00:17:08.786 "large_pool_count": 1024, 00:17:08.786 "small_bufsize": 8192, 00:17:08.786 "large_bufsize": 135168, 00:17:08.786 "enable_numa": false 00:17:08.786 } 00:17:08.786 } 00:17:08.786 ] 00:17:08.786 }, 00:17:08.786 { 00:17:08.786 "subsystem": "sock", 00:17:08.786 "config": [ 00:17:08.786 { 00:17:08.786 "method": "sock_set_default_impl", 00:17:08.786 "params": { 00:17:08.786 "impl_name": "uring" 00:17:08.786 } 00:17:08.786 }, 00:17:08.786 { 00:17:08.786 "method": "sock_impl_set_options", 00:17:08.786 "params": { 00:17:08.786 "impl_name": "ssl", 00:17:08.786 "recv_buf_size": 4096, 00:17:08.786 "send_buf_size": 4096, 00:17:08.786 "enable_recv_pipe": true, 00:17:08.786 "enable_quickack": false, 00:17:08.786 "enable_placement_id": 0, 00:17:08.786 "enable_zerocopy_send_server": true, 00:17:08.786 
"enable_zerocopy_send_client": false, 00:17:08.786 "zerocopy_threshold": 0, 00:17:08.786 "tls_version": 0, 00:17:08.786 "enable_ktls": false 00:17:08.786 } 00:17:08.786 }, 00:17:08.786 { 00:17:08.786 "method": "sock_impl_set_options", 00:17:08.786 "params": { 00:17:08.786 "impl_name": "posix", 00:17:08.786 "recv_buf_size": 2097152, 00:17:08.786 "send_buf_size": 2097152, 00:17:08.786 "enable_recv_pipe": true, 00:17:08.786 "enable_quickack": false, 00:17:08.786 "enable_placement_id": 0, 00:17:08.786 "enable_zerocopy_send_server": true, 00:17:08.786 "enable_zerocopy_send_client": false, 00:17:08.786 "zerocopy_threshold": 0, 00:17:08.786 "tls_version": 0, 00:17:08.786 "enable_ktls": false 00:17:08.786 } 00:17:08.786 }, 00:17:08.786 { 00:17:08.786 "method": "sock_impl_set_options", 00:17:08.786 "params": { 00:17:08.786 "impl_name": "uring", 00:17:08.786 "recv_buf_size": 2097152, 00:17:08.786 "send_buf_size": 2097152, 00:17:08.786 "enable_recv_pipe": true, 00:17:08.786 "enable_quickack": false, 00:17:08.786 "enable_placement_id": 0, 00:17:08.786 "enable_zerocopy_send_server": false, 00:17:08.786 "enable_zerocopy_send_client": false, 00:17:08.786 "zerocopy_threshold": 0, 00:17:08.786 "tls_version": 0, 00:17:08.786 "enable_ktls": false 00:17:08.786 } 00:17:08.786 } 00:17:08.786 ] 00:17:08.786 }, 00:17:08.786 { 00:17:08.786 "subsystem": "vmd", 00:17:08.786 "config": [] 00:17:08.786 }, 00:17:08.786 { 00:17:08.786 "subsystem": "accel", 00:17:08.786 "config": [ 00:17:08.786 { 00:17:08.786 "method": "accel_set_options", 00:17:08.786 "params": { 00:17:08.786 "small_cache_size": 128, 00:17:08.786 "large_cache_size": 16, 00:17:08.786 "task_count": 2048, 00:17:08.786 "sequence_count": 2048, 00:17:08.786 "buf_count": 2048 00:17:08.786 } 00:17:08.786 } 00:17:08.786 ] 00:17:08.787 }, 00:17:08.787 { 00:17:08.787 "subsystem": "bdev", 00:17:08.787 "config": [ 00:17:08.787 { 00:17:08.787 "method": "bdev_set_options", 00:17:08.787 "params": { 00:17:08.787 "bdev_io_pool_size": 65535, 00:17:08.787 "bdev_io_cache_size": 256, 00:17:08.787 "bdev_auto_examine": true, 00:17:08.787 "iobuf_small_cache_size": 128, 00:17:08.787 "iobuf_large_cache_size": 16 00:17:08.787 } 00:17:08.787 }, 00:17:08.787 { 00:17:08.787 "method": "bdev_raid_set_options", 00:17:08.787 "params": { 00:17:08.787 "process_window_size_kb": 1024, 00:17:08.787 "process_max_bandwidth_mb_sec": 0 00:17:08.787 } 00:17:08.787 }, 00:17:08.787 { 00:17:08.787 "method": "bdev_iscsi_set_options", 00:17:08.787 "params": { 00:17:08.787 "timeout_sec": 30 00:17:08.787 } 00:17:08.787 }, 00:17:08.787 { 00:17:08.787 "method": "bdev_nvme_set_options", 00:17:08.787 "params": { 00:17:08.787 "action_on_timeout": "none", 00:17:08.787 "timeout_us": 0, 00:17:08.787 "timeout_admin_us": 0, 00:17:08.787 "keep_alive_timeout_ms": 10000, 00:17:08.787 "arbitration_burst": 0, 00:17:08.787 "low_priority_weight": 0, 00:17:08.787 "medium_priority_weight": 0, 00:17:08.787 "high_priority_weight": 0, 00:17:08.787 "nvme_adminq_poll_period_us": 10000, 00:17:08.787 "nvme_ioq_poll_period_us": 0, 00:17:08.787 "io_queue_requests": 512, 00:17:08.787 "delay_cmd_submit": true, 00:17:08.787 "transport_retry_count": 4, 00:17:08.787 "bdev_retry_count": 3, 00:17:08.787 "transport_ack_timeout": 0, 00:17:08.787 "ctrlr_loss_timeout_sec": 0, 00:17:08.787 "reconnect_delay_sec": 0, 00:17:08.787 "fast_io_fail_timeout_sec": 0, 00:17:08.787 "disable_auto_failback": false, 00:17:08.787 "generate_uuids": false, 00:17:08.787 "transport_tos": 0, 00:17:08.787 "nvme_error_stat": false, 00:17:08.787 "rdma_srq_size": 0, 
00:17:08.787 "io_path_stat": false, 00:17:08.787 "allow_accel_sequence": false, 00:17:08.787 "rdma_max_cq_size": 0, 00:17:08.787 "rdma_cm_event_timeout_ms": 0, 00:17:08.787 "dhchap_digests": [ 00:17:08.787 "sha256", 00:17:08.787 "sha384", 00:17:08.787 "sha512" 00:17:08.787 ], 00:17:08.787 "dhchap_dhgroups": [ 00:17:08.787 "null", 00:17:08.787 "ffdhe2048", 00:17:08.787 "ffdhe3072", 00:17:08.787 "ffdhe4096", 00:17:08.787 "ffdhe6144", 00:17:08.787 "ffdhe8192" 00:17:08.787 ] 00:17:08.787 } 00:17:08.787 }, 00:17:08.787 { 00:17:08.787 "method": "bdev_nvme_attach_controller", 00:17:08.787 "params": { 00:17:08.787 "name": "TLSTEST", 00:17:08.787 "trtype": "TCP", 00:17:08.787 "adrfam": "IPv4", 00:17:08.787 "traddr": "10.0.0.3", 00:17:08.787 "trsvcid": "4420", 00:17:08.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:08.787 "prchk_reftag": false, 00:17:08.787 "prchk_guard": false, 00:17:08.787 "ctrlr_loss_timeout_sec": 0, 00:17:08.787 "reconnect_delay_sec": 0, 00:17:08.787 "fast_io_fail_timeout_sec": 0, 00:17:08.787 "psk": "key0", 00:17:08.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:08.787 "hdgst": false, 00:17:08.787 "ddgst": false, 00:17:08.787 "multipath": "multipath" 00:17:08.787 } 00:17:08.787 }, 00:17:08.787 { 00:17:08.787 "method": "bdev_nvme_set_hotplug", 00:17:08.787 "params": { 00:17:08.787 "period_us": 100000, 00:17:08.787 "enable": false 00:17:08.787 } 00:17:08.787 }, 00:17:08.787 { 00:17:08.787 "method": "bdev_wait_for_examine" 00:17:08.787 } 00:17:08.787 ] 00:17:08.787 }, 00:17:08.787 { 00:17:08.787 "subsystem": "nbd", 00:17:08.787 "config": [] 00:17:08.787 } 00:17:08.787 ] 00:17:08.787 }' 00:17:08.787 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 74851 00:17:08.787 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74851 ']' 00:17:08.787 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74851 00:17:08.787 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:08.787 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:08.787 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74851 00:17:08.787 killing process with pid 74851 00:17:08.787 Received shutdown signal, test time was about 10.000000 seconds 00:17:08.787 00:17:08.787 Latency(us) 00:17:08.787 [2024-12-05T03:01:39.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.787 [2024-12-05T03:01:39.631Z] =================================================================================================================== 00:17:08.787 [2024-12-05T03:01:39.631Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:08.787 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:08.787 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:08.787 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74851' 00:17:08.787 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74851 00:17:08.787 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74851 00:17:09.724 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 74791 00:17:09.724 03:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74791 ']' 00:17:09.724 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74791 00:17:09.724 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:09.724 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.724 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74791 00:17:09.724 killing process with pid 74791 00:17:09.724 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:09.725 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:09.725 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74791' 00:17:09.725 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74791 00:17:09.725 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74791 00:17:10.664 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:10.664 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:10.664 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:10.664 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:10.664 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:17:10.664 "subsystems": [ 00:17:10.664 { 00:17:10.664 "subsystem": "keyring", 00:17:10.664 "config": [ 00:17:10.664 { 00:17:10.664 "method": "keyring_file_add_key", 00:17:10.664 "params": { 00:17:10.664 "name": "key0", 00:17:10.664 "path": "/tmp/tmp.L9GucbBzHC" 00:17:10.664 } 00:17:10.664 } 00:17:10.664 ] 00:17:10.664 }, 00:17:10.664 { 00:17:10.664 "subsystem": "iobuf", 00:17:10.664 "config": [ 00:17:10.664 { 00:17:10.664 "method": "iobuf_set_options", 00:17:10.664 "params": { 00:17:10.664 "small_pool_count": 8192, 00:17:10.664 "large_pool_count": 1024, 00:17:10.664 "small_bufsize": 8192, 00:17:10.664 "large_bufsize": 135168, 00:17:10.664 "enable_numa": false 00:17:10.664 } 00:17:10.664 } 00:17:10.664 ] 00:17:10.664 }, 00:17:10.664 { 00:17:10.664 "subsystem": "sock", 00:17:10.664 "config": [ 00:17:10.664 { 00:17:10.664 "method": "sock_set_default_impl", 00:17:10.664 "params": { 00:17:10.664 "impl_name": "uring" 00:17:10.664 } 00:17:10.664 }, 00:17:10.664 { 00:17:10.664 "method": "sock_impl_set_options", 00:17:10.664 "params": { 00:17:10.664 "impl_name": "ssl", 00:17:10.664 "recv_buf_size": 4096, 00:17:10.664 "send_buf_size": 4096, 00:17:10.664 "enable_recv_pipe": true, 00:17:10.664 "enable_quickack": false, 00:17:10.664 "enable_placement_id": 0, 00:17:10.664 "enable_zerocopy_send_server": true, 00:17:10.664 "enable_zerocopy_send_client": false, 00:17:10.664 "zerocopy_threshold": 0, 00:17:10.664 "tls_version": 0, 00:17:10.664 "enable_ktls": false 00:17:10.664 } 00:17:10.664 }, 00:17:10.664 { 00:17:10.664 "method": "sock_impl_set_options", 00:17:10.664 "params": { 00:17:10.664 "impl_name": "posix", 00:17:10.664 "recv_buf_size": 2097152, 00:17:10.664 "send_buf_size": 2097152, 00:17:10.664 "enable_recv_pipe": true, 00:17:10.664 "enable_quickack": false, 00:17:10.664 "enable_placement_id": 0, 00:17:10.664 
"enable_zerocopy_send_server": true, 00:17:10.664 "enable_zerocopy_send_client": false, 00:17:10.664 "zerocopy_threshold": 0, 00:17:10.664 "tls_version": 0, 00:17:10.664 "enable_ktls": false 00:17:10.664 } 00:17:10.664 }, 00:17:10.664 { 00:17:10.664 "method": "sock_impl_set_options", 00:17:10.664 "params": { 00:17:10.664 "impl_name": "uring", 00:17:10.664 "recv_buf_size": 2097152, 00:17:10.664 "send_buf_size": 2097152, 00:17:10.664 "enable_recv_pipe": true, 00:17:10.664 "enable_quickack": false, 00:17:10.664 "enable_placement_id": 0, 00:17:10.664 "enable_zerocopy_send_server": false, 00:17:10.664 "enable_zerocopy_send_client": false, 00:17:10.664 "zerocopy_threshold": 0, 00:17:10.664 "tls_version": 0, 00:17:10.664 "enable_ktls": false 00:17:10.664 } 00:17:10.664 } 00:17:10.664 ] 00:17:10.664 }, 00:17:10.664 { 00:17:10.664 "subsystem": "vmd", 00:17:10.664 "config": [] 00:17:10.664 }, 00:17:10.664 { 00:17:10.664 "subsystem": "accel", 00:17:10.664 "config": [ 00:17:10.664 { 00:17:10.664 "method": "accel_set_options", 00:17:10.664 "params": { 00:17:10.664 "small_cache_size": 128, 00:17:10.664 "large_cache_size": 16, 00:17:10.664 "task_count": 2048, 00:17:10.664 "sequence_count": 2048, 00:17:10.664 "buf_count": 2048 00:17:10.664 } 00:17:10.664 } 00:17:10.664 ] 00:17:10.664 }, 00:17:10.664 { 00:17:10.664 "subsystem": "bdev", 00:17:10.664 "config": [ 00:17:10.664 { 00:17:10.664 "method": "bdev_set_options", 00:17:10.664 "params": { 00:17:10.664 "bdev_io_pool_size": 65535, 00:17:10.664 "bdev_io_cache_size": 256, 00:17:10.664 "bdev_auto_examine": true, 00:17:10.664 "iobuf_small_cache_size": 128, 00:17:10.664 "iobuf_large_cache_size": 16 00:17:10.664 } 00:17:10.664 }, 00:17:10.664 { 00:17:10.665 "method": "bdev_raid_set_options", 00:17:10.665 "params": { 00:17:10.665 "process_window_size_kb": 1024, 00:17:10.665 "process_max_bandwidth_mb_sec": 0 00:17:10.665 } 00:17:10.665 }, 00:17:10.665 { 00:17:10.665 "method": "bdev_iscsi_set_options", 00:17:10.665 "params": { 00:17:10.665 "timeout_sec": 30 00:17:10.665 } 00:17:10.665 }, 00:17:10.665 { 00:17:10.665 "method": "bdev_nvme_set_options", 00:17:10.665 "params": { 00:17:10.665 "action_on_timeout": "none", 00:17:10.665 "timeout_us": 0, 00:17:10.665 "timeout_admin_us": 0, 00:17:10.665 "keep_alive_timeout_ms": 10000, 00:17:10.665 "arbitration_burst": 0, 00:17:10.665 "low_priority_weight": 0, 00:17:10.665 "medium_priority_weight": 0, 00:17:10.665 "high_priority_weight": 0, 00:17:10.665 "nvme_adminq_poll_period_us": 10000, 00:17:10.665 "nvme_ioq_poll_period_us": 0, 00:17:10.665 "io_queue_requests": 0, 00:17:10.665 "delay_cmd_submit": true, 00:17:10.665 "transport_retry_count": 4, 00:17:10.665 "bdev_retry_count": 3, 00:17:10.665 "transport_ack_timeout": 0, 00:17:10.665 "ctrlr_loss_timeout_sec": 0, 00:17:10.665 "reconnect_delay_sec": 0, 00:17:10.665 "fast_io_fail_timeout_sec": 0, 00:17:10.665 "disable_auto_failback": false, 00:17:10.665 "generate_uuids": false, 00:17:10.665 "transport_tos": 0, 00:17:10.665 "nvme_error_stat": false, 00:17:10.665 "rdma_srq_size": 0, 00:17:10.665 "io_path_stat": false, 00:17:10.665 "allow_accel_sequence": false, 00:17:10.665 "rdma_max_cq_size": 0, 00:17:10.665 "rdma_cm_event_timeout_ms": 0, 00:17:10.665 "dhchap_digests": [ 00:17:10.665 "sha256", 00:17:10.665 "sha384", 00:17:10.665 "sha512" 00:17:10.665 ], 00:17:10.665 "dhchap_dhgroups": [ 00:17:10.665 "null", 00:17:10.665 "ffdhe2048", 00:17:10.665 "ffdhe3072", 00:17:10.665 "ffdhe4096", 00:17:10.665 "ffdhe6144", 00:17:10.665 "ffdhe8192" 00:17:10.665 ] 00:17:10.665 } 00:17:10.665 }, 
00:17:10.665 { 00:17:10.665 "method": "bdev_nvme_set_hotplug", 00:17:10.665 "params": { 00:17:10.665 "period_us": 100000, 00:17:10.665 "enable": false 00:17:10.665 } 00:17:10.665 }, 00:17:10.665 { 00:17:10.665 "method": "bdev_malloc_create", 00:17:10.665 "params": { 00:17:10.665 "name": "malloc0", 00:17:10.665 "num_blocks": 8192, 00:17:10.665 "block_size": 4096, 00:17:10.665 "physical_block_size": 4096, 00:17:10.665 "uuid": "b2356fb6-9c3a-4d1c-af72-8b5cd8e947d0", 00:17:10.665 "optimal_io_boundary": 0, 00:17:10.665 "md_size": 0, 00:17:10.665 "dif_type": 0, 00:17:10.665 "dif_is_head_of_md": false, 00:17:10.665 "dif_pi_format": 0 00:17:10.665 } 00:17:10.665 }, 00:17:10.665 { 00:17:10.665 "method": "bdev_wait_for_examine" 00:17:10.665 } 00:17:10.665 ] 00:17:10.665 }, 00:17:10.665 { 00:17:10.665 "subsystem": "nbd", 00:17:10.665 "config": [] 00:17:10.665 }, 00:17:10.665 { 00:17:10.665 "subsystem": "scheduler", 00:17:10.665 "config": [ 00:17:10.665 { 00:17:10.665 "method": "framework_set_scheduler", 00:17:10.665 "params": { 00:17:10.665 "name": "static" 00:17:10.665 } 00:17:10.665 } 00:17:10.665 ] 00:17:10.665 }, 00:17:10.665 { 00:17:10.665 "subsystem": "nvmf", 00:17:10.665 "config": [ 00:17:10.665 { 00:17:10.665 "method": "nvmf_set_config", 00:17:10.665 "params": { 00:17:10.665 "discovery_filter": "match_any", 00:17:10.665 "admin_cmd_passthru": { 00:17:10.665 "identify_ctrlr": false 00:17:10.665 }, 00:17:10.665 "dhchap_digests": [ 00:17:10.665 "sha256", 00:17:10.665 "sha384", 00:17:10.665 "sha512" 00:17:10.665 ], 00:17:10.665 "dhchap_dhgroups": [ 00:17:10.665 "null", 00:17:10.665 "ffdhe2048", 00:17:10.665 "ffdhe3072", 00:17:10.665 "ffdhe4096", 00:17:10.665 "ffdhe6144", 00:17:10.665 "ffdhe8192" 00:17:10.665 ] 00:17:10.665 } 00:17:10.665 }, 00:17:10.665 { 00:17:10.665 "method": "nvmf_set_max_subsystems", 00:17:10.665 "params": { 00:17:10.665 "max_subsystems": 1024 00:17:10.665 } 00:17:10.665 }, 00:17:10.665 { 00:17:10.665 "method": "nvmf_set_crdt", 00:17:10.665 "params": { 00:17:10.665 "crdt1": 0, 00:17:10.665 "crdt2": 0, 00:17:10.665 "crdt3": 0 00:17:10.665 } 00:17:10.665 }, 00:17:10.665 { 00:17:10.665 "method": "nvmf_create_transport", 00:17:10.665 "params": { 00:17:10.665 "trtype": "TCP", 00:17:10.665 "max_queue_depth": 128, 00:17:10.665 "max_io_qpairs_per_ctrlr": 127, 00:17:10.665 "in_capsule_data_size": 4096, 00:17:10.665 "max_io_size": 131072, 00:17:10.665 "io_unit_size": 131072, 00:17:10.665 "max_aq_depth": 128, 00:17:10.665 "num_shared_buffers": 511, 00:17:10.665 "buf_cache_size": 4294967295, 00:17:10.665 "dif_insert_or_strip": false, 00:17:10.665 "zcopy": false, 00:17:10.665 "c2h_success": false, 00:17:10.665 "sock_priority": 0, 00:17:10.665 "abort_timeout_sec": 1, 00:17:10.665 "ack_timeout": 0, 00:17:10.665 "data_wr_pool_size": 0 00:17:10.665 } 00:17:10.665 }, 00:17:10.665 { 00:17:10.665 "method": "nvmf_create_subsystem", 00:17:10.665 "params": { 00:17:10.665 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.665 "allow_any_host": false, 00:17:10.665 "serial_number": "SPDK00000000000001", 00:17:10.665 "model_number": "SPDK bdev Controller", 00:17:10.665 "max_namespaces": 10, 00:17:10.665 "min_cntlid": 1, 00:17:10.665 "max_cntlid": 65519, 00:17:10.665 "ana_reporting": false 00:17:10.665 } 00:17:10.665 }, 00:17:10.665 { 00:17:10.665 "method": "nvmf_subsystem_add_host", 00:17:10.665 "params": { 00:17:10.665 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.665 "host": "nqn.2016-06.io.spdk:host1", 00:17:10.665 "psk": "key0" 00:17:10.665 } 00:17:10.665 }, 00:17:10.665 { 00:17:10.665 "method": 
"nvmf_subsystem_add_ns", 00:17:10.665 "params": { 00:17:10.665 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.665 "namespace": { 00:17:10.665 "nsid": 1, 00:17:10.665 "bdev_name": "malloc0", 00:17:10.665 "nguid": "B2356FB69C3A4D1CAF728B5CD8E947D0", 00:17:10.665 "uuid": "b2356fb6-9c3a-4d1c-af72-8b5cd8e947d0", 00:17:10.665 "no_auto_visible": false 00:17:10.665 } 00:17:10.665 } 00:17:10.665 }, 00:17:10.665 { 00:17:10.665 "method": "nvmf_subsystem_add_listener", 00:17:10.665 "params": { 00:17:10.665 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.665 "listen_address": { 00:17:10.665 "trtype": "TCP", 00:17:10.665 "adrfam": "IPv4", 00:17:10.665 "traddr": "10.0.0.3", 00:17:10.665 "trsvcid": "4420" 00:17:10.665 }, 00:17:10.665 "secure_channel": true 00:17:10.665 } 00:17:10.665 } 00:17:10.665 ] 00:17:10.665 } 00:17:10.665 ] 00:17:10.665 }' 00:17:10.665 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=74915 00:17:10.665 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:10.665 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 74915 00:17:10.665 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74915 ']' 00:17:10.665 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.665 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.665 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.665 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.665 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:10.665 [2024-12-05 03:01:41.351930] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:17:10.665 [2024-12-05 03:01:41.352110] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.925 [2024-12-05 03:01:41.527262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.925 [2024-12-05 03:01:41.624249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.925 [2024-12-05 03:01:41.624325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.925 [2024-12-05 03:01:41.624359] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.925 [2024-12-05 03:01:41.624380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.925 [2024-12-05 03:01:41.624394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:10.925 [2024-12-05 03:01:41.625611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.185 [2024-12-05 03:01:41.896705] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:11.444 [2024-12-05 03:01:42.056129] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.444 [2024-12-05 03:01:42.088086] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:11.444 [2024-12-05 03:01:42.088434] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:11.444 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.444 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:11.444 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:11.444 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:11.444 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:11.444 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.444 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=74947 00:17:11.444 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 74947 /var/tmp/bdevperf.sock 00:17:11.444 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74947 ']' 00:17:11.444 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.444 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:17:11.444 "subsystems": [ 00:17:11.444 { 00:17:11.444 "subsystem": "keyring", 00:17:11.444 "config": [ 00:17:11.444 { 00:17:11.444 "method": "keyring_file_add_key", 00:17:11.444 "params": { 00:17:11.444 "name": "key0", 00:17:11.444 "path": "/tmp/tmp.L9GucbBzHC" 00:17:11.444 } 00:17:11.444 } 00:17:11.444 ] 00:17:11.444 }, 00:17:11.444 { 00:17:11.444 "subsystem": "iobuf", 00:17:11.444 "config": [ 00:17:11.444 { 00:17:11.444 "method": "iobuf_set_options", 00:17:11.444 "params": { 00:17:11.444 "small_pool_count": 8192, 00:17:11.445 "large_pool_count": 1024, 00:17:11.445 "small_bufsize": 8192, 00:17:11.445 "large_bufsize": 135168, 00:17:11.445 "enable_numa": false 00:17:11.445 } 00:17:11.445 } 00:17:11.445 ] 00:17:11.445 }, 00:17:11.445 { 00:17:11.445 "subsystem": "sock", 00:17:11.445 "config": [ 00:17:11.445 { 00:17:11.445 "method": "sock_set_default_impl", 00:17:11.445 "params": { 00:17:11.445 "impl_name": "uring" 00:17:11.445 } 00:17:11.445 }, 00:17:11.445 { 00:17:11.445 "method": "sock_impl_set_options", 00:17:11.445 "params": { 00:17:11.445 "impl_name": "ssl", 00:17:11.445 "recv_buf_size": 4096, 00:17:11.445 "send_buf_size": 4096, 00:17:11.445 "enable_recv_pipe": true, 00:17:11.445 "enable_quickack": false, 00:17:11.445 "enable_placement_id": 0, 00:17:11.445 "enable_zerocopy_send_server": true, 00:17:11.445 "enable_zerocopy_send_client": false, 00:17:11.445 "zerocopy_threshold": 0, 00:17:11.445 "tls_version": 0, 00:17:11.445 "enable_ktls": false 00:17:11.445 } 00:17:11.445 }, 00:17:11.445 { 00:17:11.445 "method": "sock_impl_set_options", 00:17:11.445 "params": { 00:17:11.445 "impl_name": "posix", 00:17:11.445 "recv_buf_size": 2097152, 00:17:11.445 
"send_buf_size": 2097152, 00:17:11.445 "enable_recv_pipe": true, 00:17:11.445 "enable_quickack": false, 00:17:11.445 "enable_placement_id": 0, 00:17:11.445 "enable_zerocopy_send_server": true, 00:17:11.445 "enable_zerocopy_send_client": false, 00:17:11.445 "zerocopy_threshold": 0, 00:17:11.445 "tls_version": 0, 00:17:11.445 "enable_ktls": false 00:17:11.445 } 00:17:11.445 }, 00:17:11.445 { 00:17:11.445 "method": "sock_impl_set_options", 00:17:11.445 "params": { 00:17:11.445 "impl_name": "uring", 00:17:11.445 "recv_buf_size": 2097152, 00:17:11.445 "send_buf_size": 2097152, 00:17:11.445 "enable_recv_pipe": true, 00:17:11.445 "enable_quickack": false, 00:17:11.445 "enable_placement_id": 0, 00:17:11.445 "enable_zerocopy_send_server": false, 00:17:11.445 "enable_zerocopy_send_client": false, 00:17:11.445 "zerocopy_threshold": 0, 00:17:11.445 "tls_version": 0, 00:17:11.445 "enable_ktls": false 00:17:11.445 } 00:17:11.445 } 00:17:11.445 ] 00:17:11.445 }, 00:17:11.445 { 00:17:11.445 "subsystem": "vmd", 00:17:11.445 "config": [] 00:17:11.445 }, 00:17:11.445 { 00:17:11.445 "subsystem": "accel", 00:17:11.445 "config": [ 00:17:11.445 { 00:17:11.445 "method": "accel_set_options", 00:17:11.445 "params": { 00:17:11.445 "small_cache_size": 128, 00:17:11.445 "large_cache_size": 16, 00:17:11.445 "task_count": 2048, 00:17:11.445 "sequence_count": 2048, 00:17:11.445 "buf_count": 2048 00:17:11.445 } 00:17:11.445 } 00:17:11.445 ] 00:17:11.445 }, 00:17:11.445 { 00:17:11.445 "subsystem": "bdev", 00:17:11.445 "config": [ 00:17:11.445 { 00:17:11.445 "method": "bdev_set_options", 00:17:11.445 "params": { 00:17:11.445 "bdev_io_pool_size": 65535, 00:17:11.445 "bdev_io_cache_size": 256, 00:17:11.445 "bdev_auto_examine": true, 00:17:11.445 "iobuf_small_cache_size": 128, 00:17:11.445 "iobuf_large_cache_size": 16 00:17:11.445 } 00:17:11.445 }, 00:17:11.445 { 00:17:11.445 "method": "bdev_raid_set_options", 00:17:11.445 "params": { 00:17:11.445 "process_window_size_kb": 1024, 00:17:11.445 "process_max_bandwidth_mb_sec": 0 00:17:11.445 } 00:17:11.445 }, 00:17:11.445 { 00:17:11.445 "method": "bdev_iscsi_set_options", 00:17:11.445 "params": { 00:17:11.445 "timeout_sec": 30 00:17:11.445 } 00:17:11.445 }, 00:17:11.445 { 00:17:11.445 "method": "bdev_nvme_set_options", 00:17:11.445 "params": { 00:17:11.445 "action_on_timeout": "none", 00:17:11.445 "timeout_us": 0, 00:17:11.445 "timeout_admin_us": 0, 00:17:11.445 "keep_alive_timeout_ms": 10000, 00:17:11.445 "arbitration_burst": 0, 00:17:11.445 "low_priority_weight": 0, 00:17:11.445 "medium_priority_weight": 0, 00:17:11.445 "high_priority_weight": 0, 00:17:11.445 "nvme_adminq_poll_period_us": 10000, 00:17:11.445 "nvme_ioq_poll_period_us": 0, 00:17:11.445 "io_queue_requests": 512, 00:17:11.445 "delay_cmd_submit": true, 00:17:11.445 "transport_retry_count": 4, 00:17:11.445 "bdev_retry_count": 3, 00:17:11.445 "transport_ack_timeout": 0, 00:17:11.445 "ctrlr_loss_timeout_sec": 0, 00:17:11.445 "reconnect_delay_sec": 0, 00:17:11.445 "fast_io_fail_timeout_sec": 0, 00:17:11.445 "disable_auto_failback": false, 00:17:11.445 "generate_uuids": false, 00:17:11.445 "transport_tos": 0, 00:17:11.445 "nvme_error_stat": false, 00:17:11.445 "rdma_srq_size": 0, 00:17:11.445 "io_path_stat": false, 00:17:11.445 "allow_accel_sequence": false, 00:17:11.445 "rdma_max_cq_size": 0, 00:17:11.445 "rdma_cm_event_timeout_ms": 0, 00:17:11.445 "dhchap_digests": [ 00:17:11.445 "sha256", 00:17:11.445 "sha384", 00:17:11.445 "sha512" 00:17:11.445 ], 00:17:11.445 "dhchap_dhgroups": [ 00:17:11.445 "null", 00:17:11.445 
"ffdhe2048", 00:17:11.445 "ffdhe3072", 00:17:11.445 "ffdhe4096", 00:17:11.445 "ffdhe6144", 00:17:11.445 "ffdhe8192" 00:17:11.445 ] 00:17:11.445 } 00:17:11.445 }, 00:17:11.445 { 00:17:11.445 "method": "bdev_nvme_attach_controller", 00:17:11.445 "params": { 00:17:11.445 "name": "TLSTEST", 00:17:11.445 "trtype": "TCP", 00:17:11.445 "adrfam": "IPv4", 00:17:11.445 "traddr": "10.0.0.3", 00:17:11.445 "trsvcid": "4420", 00:17:11.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.445 "prchk_reftag": false, 00:17:11.445 "prchk_guard": false, 00:17:11.445 "ctrlr_loss_timeout_sec": 0, 00:17:11.445 "reconnect_delay_sec": 0, 00:17:11.445 "fast_io_fail_timeout_sec": 0, 00:17:11.445 "psk": "key0", 00:17:11.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:11.445 "hdgst": false, 00:17:11.445 "ddgst": false, 00:17:11.445 "multipath": "multipath" 00:17:11.445 } 00:17:11.445 }, 00:17:11.445 { 00:17:11.445 "method": "bdev_nvme_set_hotplug", 00:17:11.445 "params": { 00:17:11.445 "period_us": 100000, 00:17:11.445 "enable": false 00:17:11.445 } 00:17:11.445 }, 00:17:11.445 { 00:17:11.445 "method": "bdev_wait_for_examine" 00:17:11.445 } 00:17:11.445 ] 00:17:11.445 }, 00:17:11.445 { 00:17:11.445 "subsystem": "nbd", 00:17:11.445 "config": [] 00:17:11.445 } 00:17:11.445 ] 00:17:11.445 }' 00:17:11.445 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:11.445 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.704 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.704 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.704 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:11.704 [2024-12-05 03:01:42.371041] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:17:11.705 [2024-12-05 03:01:42.371219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74947 ] 00:17:11.705 [2024-12-05 03:01:42.546341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.964 [2024-12-05 03:01:42.672929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.223 [2024-12-05 03:01:42.934525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:12.223 [2024-12-05 03:01:43.050526] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:12.792 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.792 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:12.792 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:12.792 Running I/O for 10 seconds... 
00:17:14.667 2845.00 IOPS, 11.11 MiB/s [2024-12-05T03:01:46.886Z] 2971.00 IOPS, 11.61 MiB/s [2024-12-05T03:01:47.454Z] 2986.67 IOPS, 11.67 MiB/s [2024-12-05T03:01:48.832Z] 2976.00 IOPS, 11.62 MiB/s [2024-12-05T03:01:49.769Z] 2957.20 IOPS, 11.55 MiB/s [2024-12-05T03:01:50.706Z] 2947.00 IOPS, 11.51 MiB/s [2024-12-05T03:01:51.643Z] 2944.00 IOPS, 11.50 MiB/s [2024-12-05T03:01:52.594Z] 2931.00 IOPS, 11.45 MiB/s [2024-12-05T03:01:53.546Z] 2929.78 IOPS, 11.44 MiB/s [2024-12-05T03:01:53.546Z] 2923.90 IOPS, 11.42 MiB/s 00:17:22.702 Latency(us) 00:17:22.702 [2024-12-05T03:01:53.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.702 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:22.702 Verification LBA range: start 0x0 length 0x2000 00:17:22.702 TLSTESTn1 : 10.03 2926.72 11.43 0.00 0.00 43619.83 7328.12 27167.65 00:17:22.702 [2024-12-05T03:01:53.546Z] =================================================================================================================== 00:17:22.702 [2024-12-05T03:01:53.546Z] Total : 2926.72 11.43 0.00 0.00 43619.83 7328.12 27167.65 00:17:22.702 { 00:17:22.702 "results": [ 00:17:22.702 { 00:17:22.702 "job": "TLSTESTn1", 00:17:22.702 "core_mask": "0x4", 00:17:22.702 "workload": "verify", 00:17:22.702 "status": "finished", 00:17:22.702 "verify_range": { 00:17:22.702 "start": 0, 00:17:22.702 "length": 8192 00:17:22.702 }, 00:17:22.702 "queue_depth": 128, 00:17:22.702 "io_size": 4096, 00:17:22.702 "runtime": 10.033069, 00:17:22.702 "iops": 2926.7216242607324, 00:17:22.702 "mibps": 11.432506344768486, 00:17:22.702 "io_failed": 0, 00:17:22.702 "io_timeout": 0, 00:17:22.702 "avg_latency_us": 43619.8323597231, 00:17:22.702 "min_latency_us": 7328.1163636363635, 00:17:22.702 "max_latency_us": 27167.65090909091 00:17:22.702 } 00:17:22.702 ], 00:17:22.702 "core_count": 1 00:17:22.702 } 00:17:22.702 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:22.702 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 74947 00:17:22.702 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74947 ']' 00:17:22.702 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74947 00:17:22.702 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:22.702 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.702 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74947 00:17:22.702 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:22.702 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:22.702 killing process with pid 74947 00:17:22.702 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74947' 00:17:22.702 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.702 00:17:22.702 Latency(us) 00:17:22.702 [2024-12-05T03:01:53.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.702 [2024-12-05T03:01:53.546Z] =================================================================================================================== 00:17:22.702 [2024-12-05T03:01:53.546Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:17:22.702 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74947 00:17:22.702 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74947 00:17:24.080 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 74915 00:17:24.080 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74915 ']' 00:17:24.080 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74915 00:17:24.080 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:24.080 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:24.080 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74915 00:17:24.081 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:24.081 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:24.081 killing process with pid 74915 00:17:24.081 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74915' 00:17:24.081 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74915 00:17:24.081 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74915 00:17:25.016 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:17:25.016 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:25.016 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:25.016 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:25.016 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75099 00:17:25.016 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:25.016 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75099 00:17:25.016 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75099 ']' 00:17:25.016 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.016 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.016 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.016 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.016 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:25.016 [2024-12-05 03:01:55.714695] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:17:25.016 [2024-12-05 03:01:55.714869] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.275 [2024-12-05 03:01:55.894851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.275 [2024-12-05 03:01:56.024266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.275 [2024-12-05 03:01:56.024395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.275 [2024-12-05 03:01:56.024432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.275 [2024-12-05 03:01:56.024460] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.275 [2024-12-05 03:01:56.024476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.275 [2024-12-05 03:01:56.026029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.534 [2024-12-05 03:01:56.204015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:26.104 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.104 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:26.104 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:26.104 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:26.104 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:26.104 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.104 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.L9GucbBzHC 00:17:26.104 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.L9GucbBzHC 00:17:26.104 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:26.363 [2024-12-05 03:01:56.959814] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.363 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:26.622 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:26.881 [2024-12-05 03:01:57.512039] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:26.881 [2024-12-05 03:01:57.512493] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:26.881 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:27.140 malloc0 00:17:27.140 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
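With the namespace attached, the remaining steps pair the PSK across both sides: the target registers the key file as key0 and grants nqn.2016-06.io.spdk:host1 access with it, then bdevperf registers the same key on its own RPC socket before attaching the controller over TLS. Pulled out of the xtrace noise, that pairing is just (paths and NQNs exactly as this run uses them):

  # target: register the PSK file and tie it to the host NQN
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.L9GucbBzHC
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  # initiator: same key on the bdevperf socket, then a TLS-protected attach
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.L9GucbBzHC
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1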
00:17:27.400 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.L9GucbBzHC 00:17:27.660 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:27.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:27.920 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=75160 00:17:27.920 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:27.920 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:27.920 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 75160 /var/tmp/bdevperf.sock 00:17:27.920 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75160 ']' 00:17:27.920 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.920 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.920 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:27.920 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.920 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.920 [2024-12-05 03:01:58.700541] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:17:27.920 [2024-12-05 03:01:58.700700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75160 ] 00:17:28.180 [2024-12-05 03:01:58.871877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.180 [2024-12-05 03:01:58.968415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.440 [2024-12-05 03:01:59.147020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:29.008 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.009 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:29.009 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.L9GucbBzHC 00:17:29.268 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:29.527 [2024-12-05 03:02:00.185247] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:29.527 nvme0n1 00:17:29.527 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:29.786 Running I/O for 1 seconds... 00:17:30.721 2827.00 IOPS, 11.04 MiB/s 00:17:30.721 Latency(us) 00:17:30.721 [2024-12-05T03:02:01.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.721 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:30.721 Verification LBA range: start 0x0 length 0x2000 00:17:30.721 nvme0n1 : 1.03 2875.78 11.23 0.00 0.00 43783.90 1846.92 26214.40 00:17:30.721 [2024-12-05T03:02:01.565Z] =================================================================================================================== 00:17:30.721 [2024-12-05T03:02:01.565Z] Total : 2875.78 11.23 0.00 0.00 43783.90 1846.92 26214.40 00:17:30.721 { 00:17:30.721 "results": [ 00:17:30.721 { 00:17:30.721 "job": "nvme0n1", 00:17:30.721 "core_mask": "0x2", 00:17:30.721 "workload": "verify", 00:17:30.721 "status": "finished", 00:17:30.721 "verify_range": { 00:17:30.721 "start": 0, 00:17:30.721 "length": 8192 00:17:30.721 }, 00:17:30.721 "queue_depth": 128, 00:17:30.721 "io_size": 4096, 00:17:30.721 "runtime": 1.027547, 00:17:30.721 "iops": 2875.7808645249315, 00:17:30.721 "mibps": 11.233519002050514, 00:17:30.721 "io_failed": 0, 00:17:30.721 "io_timeout": 0, 00:17:30.721 "avg_latency_us": 43783.901228734045, 00:17:30.721 "min_latency_us": 1846.9236363636364, 00:17:30.721 "max_latency_us": 26214.4 00:17:30.721 } 00:17:30.721 ], 00:17:30.721 "core_count": 1 00:17:30.721 } 00:17:30.721 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 75160 00:17:30.721 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75160 ']' 00:17:30.721 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75160 00:17:30.721 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:17:30.721 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.721 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75160 00:17:30.721 killing process with pid 75160 00:17:30.721 Received shutdown signal, test time was about 1.000000 seconds 00:17:30.721 00:17:30.721 Latency(us) 00:17:30.721 [2024-12-05T03:02:01.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.721 [2024-12-05T03:02:01.565Z] =================================================================================================================== 00:17:30.721 [2024-12-05T03:02:01.565Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:30.721 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:30.721 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:30.721 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75160' 00:17:30.721 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75160 00:17:30.721 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75160 00:17:31.658 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 75099 00:17:31.658 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75099 ']' 00:17:31.658 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75099 00:17:31.658 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:31.658 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.658 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75099 00:17:31.658 killing process with pid 75099 00:17:31.658 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.658 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.658 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75099' 00:17:31.658 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75099 00:17:31.658 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75099 00:17:33.039 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:17:33.039 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:33.039 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:33.039 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.039 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75230 00:17:33.039 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:33.039 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75230 00:17:33.039 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 75230 ']' 00:17:33.039 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.039 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.039 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.039 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.039 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.039 [2024-12-05 03:02:03.561317] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:17:33.039 [2024-12-05 03:02:03.561498] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.039 [2024-12-05 03:02:03.730106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.039 [2024-12-05 03:02:03.829079] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.039 [2024-12-05 03:02:03.829161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.039 [2024-12-05 03:02:03.829182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.039 [2024-12-05 03:02:03.829207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.039 [2024-12-05 03:02:03.829221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:33.039 [2024-12-05 03:02:03.830487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.298 [2024-12-05 03:02:03.994482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:33.894 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:33.894 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:33.894 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:33.894 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:33.894 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.894 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.894 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:17:33.894 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.894 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.894 [2024-12-05 03:02:04.543336] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.894 malloc0 00:17:33.894 [2024-12-05 03:02:04.589024] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:33.894 [2024-12-05 03:02:04.589337] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:33.894 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.894 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=75262 00:17:33.894 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:33.894 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 75262 /var/tmp/bdevperf.sock 00:17:33.894 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75262 ']' 00:17:33.894 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:33.894 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:33.894 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:33.894 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.894 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.894 [2024-12-05 03:02:04.729727] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:17:33.894 [2024-12-05 03:02:04.729923] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75262 ] 00:17:34.152 [2024-12-05 03:02:04.904837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.152 [2024-12-05 03:02:04.985422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.410 [2024-12-05 03:02:05.146435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:34.975 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.975 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:34.975 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.L9GucbBzHC 00:17:35.233 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:35.490 [2024-12-05 03:02:06.183588] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:35.490 nvme0n1 00:17:35.490 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:35.748 Running I/O for 1 seconds... 00:17:36.684 3334.00 IOPS, 13.02 MiB/s 00:17:36.684 Latency(us) 00:17:36.684 [2024-12-05T03:02:07.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.684 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:36.684 Verification LBA range: start 0x0 length 0x2000 00:17:36.684 nvme0n1 : 1.02 3384.38 13.22 0.00 0.00 37295.19 1057.51 22997.18 00:17:36.684 [2024-12-05T03:02:07.528Z] =================================================================================================================== 00:17:36.684 [2024-12-05T03:02:07.528Z] Total : 3384.38 13.22 0.00 0.00 37295.19 1057.51 22997.18 00:17:36.684 { 00:17:36.684 "results": [ 00:17:36.684 { 00:17:36.684 "job": "nvme0n1", 00:17:36.684 "core_mask": "0x2", 00:17:36.684 "workload": "verify", 00:17:36.684 "status": "finished", 00:17:36.684 "verify_range": { 00:17:36.684 "start": 0, 00:17:36.684 "length": 8192 00:17:36.684 }, 00:17:36.684 "queue_depth": 128, 00:17:36.684 "io_size": 4096, 00:17:36.684 "runtime": 1.02323, 00:17:36.684 "iops": 3384.380833243748, 00:17:36.684 "mibps": 13.22023762985839, 00:17:36.684 "io_failed": 0, 00:17:36.684 "io_timeout": 0, 00:17:36.684 "avg_latency_us": 37295.19451027748, 00:17:36.684 "min_latency_us": 1057.5127272727273, 00:17:36.684 "max_latency_us": 22997.17818181818 00:17:36.684 } 00:17:36.684 ], 00:17:36.684 "core_count": 1 00:17:36.684 } 00:17:36.684 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:17:36.684 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.684 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.978 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.978 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:17:36.978 "subsystems": [ 00:17:36.978 { 00:17:36.978 "subsystem": "keyring", 00:17:36.978 "config": [ 00:17:36.978 { 00:17:36.978 "method": "keyring_file_add_key", 00:17:36.978 "params": { 00:17:36.978 "name": "key0", 00:17:36.978 "path": "/tmp/tmp.L9GucbBzHC" 00:17:36.978 } 00:17:36.978 } 00:17:36.978 ] 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "subsystem": "iobuf", 00:17:36.978 "config": [ 00:17:36.978 { 00:17:36.978 "method": "iobuf_set_options", 00:17:36.978 "params": { 00:17:36.978 "small_pool_count": 8192, 00:17:36.978 "large_pool_count": 1024, 00:17:36.978 "small_bufsize": 8192, 00:17:36.978 "large_bufsize": 135168, 00:17:36.978 "enable_numa": false 00:17:36.978 } 00:17:36.978 } 00:17:36.978 ] 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "subsystem": "sock", 00:17:36.978 "config": [ 00:17:36.978 { 00:17:36.978 "method": "sock_set_default_impl", 00:17:36.978 "params": { 00:17:36.978 "impl_name": "uring" 00:17:36.978 } 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "method": "sock_impl_set_options", 00:17:36.978 "params": { 00:17:36.978 "impl_name": "ssl", 00:17:36.978 "recv_buf_size": 4096, 00:17:36.978 "send_buf_size": 4096, 00:17:36.978 "enable_recv_pipe": true, 00:17:36.978 "enable_quickack": false, 00:17:36.978 "enable_placement_id": 0, 00:17:36.978 "enable_zerocopy_send_server": true, 00:17:36.978 "enable_zerocopy_send_client": false, 00:17:36.978 "zerocopy_threshold": 0, 00:17:36.978 "tls_version": 0, 00:17:36.978 "enable_ktls": false 00:17:36.978 } 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "method": "sock_impl_set_options", 00:17:36.978 "params": { 00:17:36.978 "impl_name": "posix", 00:17:36.978 "recv_buf_size": 2097152, 00:17:36.978 "send_buf_size": 2097152, 00:17:36.978 "enable_recv_pipe": true, 00:17:36.978 "enable_quickack": false, 00:17:36.978 "enable_placement_id": 0, 00:17:36.978 "enable_zerocopy_send_server": true, 00:17:36.978 "enable_zerocopy_send_client": false, 00:17:36.978 "zerocopy_threshold": 0, 00:17:36.978 "tls_version": 0, 00:17:36.978 "enable_ktls": false 00:17:36.978 } 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "method": "sock_impl_set_options", 00:17:36.978 "params": { 00:17:36.978 "impl_name": "uring", 00:17:36.978 "recv_buf_size": 2097152, 00:17:36.978 "send_buf_size": 2097152, 00:17:36.978 "enable_recv_pipe": true, 00:17:36.978 "enable_quickack": false, 00:17:36.978 "enable_placement_id": 0, 00:17:36.978 "enable_zerocopy_send_server": false, 00:17:36.978 "enable_zerocopy_send_client": false, 00:17:36.978 "zerocopy_threshold": 0, 00:17:36.978 "tls_version": 0, 00:17:36.978 "enable_ktls": false 00:17:36.978 } 00:17:36.978 } 00:17:36.978 ] 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "subsystem": "vmd", 00:17:36.978 "config": [] 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "subsystem": "accel", 00:17:36.978 "config": [ 00:17:36.978 { 00:17:36.978 "method": "accel_set_options", 00:17:36.978 "params": { 00:17:36.978 "small_cache_size": 128, 00:17:36.978 "large_cache_size": 16, 00:17:36.978 "task_count": 2048, 00:17:36.978 "sequence_count": 2048, 00:17:36.978 "buf_count": 2048 00:17:36.978 } 00:17:36.978 } 00:17:36.978 ] 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "subsystem": "bdev", 00:17:36.978 "config": [ 00:17:36.978 { 00:17:36.978 "method": "bdev_set_options", 00:17:36.978 "params": { 00:17:36.978 "bdev_io_pool_size": 65535, 00:17:36.978 "bdev_io_cache_size": 256, 00:17:36.978 "bdev_auto_examine": true, 
00:17:36.978 "iobuf_small_cache_size": 128, 00:17:36.978 "iobuf_large_cache_size": 16 00:17:36.978 } 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "method": "bdev_raid_set_options", 00:17:36.978 "params": { 00:17:36.978 "process_window_size_kb": 1024, 00:17:36.978 "process_max_bandwidth_mb_sec": 0 00:17:36.978 } 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "method": "bdev_iscsi_set_options", 00:17:36.978 "params": { 00:17:36.978 "timeout_sec": 30 00:17:36.978 } 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "method": "bdev_nvme_set_options", 00:17:36.978 "params": { 00:17:36.978 "action_on_timeout": "none", 00:17:36.978 "timeout_us": 0, 00:17:36.978 "timeout_admin_us": 0, 00:17:36.978 "keep_alive_timeout_ms": 10000, 00:17:36.978 "arbitration_burst": 0, 00:17:36.978 "low_priority_weight": 0, 00:17:36.978 "medium_priority_weight": 0, 00:17:36.978 "high_priority_weight": 0, 00:17:36.978 "nvme_adminq_poll_period_us": 10000, 00:17:36.978 "nvme_ioq_poll_period_us": 0, 00:17:36.978 "io_queue_requests": 0, 00:17:36.978 "delay_cmd_submit": true, 00:17:36.978 "transport_retry_count": 4, 00:17:36.978 "bdev_retry_count": 3, 00:17:36.978 "transport_ack_timeout": 0, 00:17:36.978 "ctrlr_loss_timeout_sec": 0, 00:17:36.978 "reconnect_delay_sec": 0, 00:17:36.978 "fast_io_fail_timeout_sec": 0, 00:17:36.978 "disable_auto_failback": false, 00:17:36.978 "generate_uuids": false, 00:17:36.978 "transport_tos": 0, 00:17:36.978 "nvme_error_stat": false, 00:17:36.978 "rdma_srq_size": 0, 00:17:36.978 "io_path_stat": false, 00:17:36.978 "allow_accel_sequence": false, 00:17:36.978 "rdma_max_cq_size": 0, 00:17:36.978 "rdma_cm_event_timeout_ms": 0, 00:17:36.978 "dhchap_digests": [ 00:17:36.978 "sha256", 00:17:36.978 "sha384", 00:17:36.978 "sha512" 00:17:36.978 ], 00:17:36.978 "dhchap_dhgroups": [ 00:17:36.978 "null", 00:17:36.978 "ffdhe2048", 00:17:36.978 "ffdhe3072", 00:17:36.978 "ffdhe4096", 00:17:36.978 "ffdhe6144", 00:17:36.978 "ffdhe8192" 00:17:36.978 ] 00:17:36.978 } 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "method": "bdev_nvme_set_hotplug", 00:17:36.978 "params": { 00:17:36.978 "period_us": 100000, 00:17:36.978 "enable": false 00:17:36.978 } 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "method": "bdev_malloc_create", 00:17:36.978 "params": { 00:17:36.978 "name": "malloc0", 00:17:36.978 "num_blocks": 8192, 00:17:36.978 "block_size": 4096, 00:17:36.978 "physical_block_size": 4096, 00:17:36.978 "uuid": "7939427c-084e-4ce8-88bd-2a7bdd289a83", 00:17:36.978 "optimal_io_boundary": 0, 00:17:36.978 "md_size": 0, 00:17:36.978 "dif_type": 0, 00:17:36.978 "dif_is_head_of_md": false, 00:17:36.978 "dif_pi_format": 0 00:17:36.978 } 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "method": "bdev_wait_for_examine" 00:17:36.978 } 00:17:36.978 ] 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "subsystem": "nbd", 00:17:36.978 "config": [] 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "subsystem": "scheduler", 00:17:36.978 "config": [ 00:17:36.978 { 00:17:36.978 "method": "framework_set_scheduler", 00:17:36.978 "params": { 00:17:36.978 "name": "static" 00:17:36.978 } 00:17:36.978 } 00:17:36.978 ] 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "subsystem": "nvmf", 00:17:36.978 "config": [ 00:17:36.978 { 00:17:36.978 "method": "nvmf_set_config", 00:17:36.978 "params": { 00:17:36.978 "discovery_filter": "match_any", 00:17:36.978 "admin_cmd_passthru": { 00:17:36.978 "identify_ctrlr": false 00:17:36.978 }, 00:17:36.978 "dhchap_digests": [ 00:17:36.978 "sha256", 00:17:36.978 "sha384", 00:17:36.978 "sha512" 00:17:36.978 ], 00:17:36.978 "dhchap_dhgroups": [ 
00:17:36.978 "null", 00:17:36.978 "ffdhe2048", 00:17:36.978 "ffdhe3072", 00:17:36.978 "ffdhe4096", 00:17:36.978 "ffdhe6144", 00:17:36.978 "ffdhe8192" 00:17:36.978 ] 00:17:36.978 } 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "method": "nvmf_set_max_subsystems", 00:17:36.978 "params": { 00:17:36.978 "max_subsystems": 1024 00:17:36.978 } 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "method": "nvmf_set_crdt", 00:17:36.978 "params": { 00:17:36.978 "crdt1": 0, 00:17:36.978 "crdt2": 0, 00:17:36.978 "crdt3": 0 00:17:36.978 } 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "method": "nvmf_create_transport", 00:17:36.978 "params": { 00:17:36.978 "trtype": "TCP", 00:17:36.978 "max_queue_depth": 128, 00:17:36.978 "max_io_qpairs_per_ctrlr": 127, 00:17:36.978 "in_capsule_data_size": 4096, 00:17:36.978 "max_io_size": 131072, 00:17:36.978 "io_unit_size": 131072, 00:17:36.978 "max_aq_depth": 128, 00:17:36.978 "num_shared_buffers": 511, 00:17:36.978 "buf_cache_size": 4294967295, 00:17:36.978 "dif_insert_or_strip": false, 00:17:36.978 "zcopy": false, 00:17:36.978 "c2h_success": false, 00:17:36.978 "sock_priority": 0, 00:17:36.978 "abort_timeout_sec": 1, 00:17:36.978 "ack_timeout": 0, 00:17:36.978 "data_wr_pool_size": 0 00:17:36.978 } 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "method": "nvmf_create_subsystem", 00:17:36.978 "params": { 00:17:36.978 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.978 "allow_any_host": false, 00:17:36.978 "serial_number": "00000000000000000000", 00:17:36.978 "model_number": "SPDK bdev Controller", 00:17:36.978 "max_namespaces": 32, 00:17:36.978 "min_cntlid": 1, 00:17:36.978 "max_cntlid": 65519, 00:17:36.978 "ana_reporting": false 00:17:36.978 } 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "method": "nvmf_subsystem_add_host", 00:17:36.978 "params": { 00:17:36.978 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.978 "host": "nqn.2016-06.io.spdk:host1", 00:17:36.978 "psk": "key0" 00:17:36.978 } 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "method": "nvmf_subsystem_add_ns", 00:17:36.978 "params": { 00:17:36.978 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.978 "namespace": { 00:17:36.978 "nsid": 1, 00:17:36.978 "bdev_name": "malloc0", 00:17:36.978 "nguid": "7939427C084E4CE888BD2A7BDD289A83", 00:17:36.978 "uuid": "7939427c-084e-4ce8-88bd-2a7bdd289a83", 00:17:36.978 "no_auto_visible": false 00:17:36.978 } 00:17:36.978 } 00:17:36.978 }, 00:17:36.978 { 00:17:36.978 "method": "nvmf_subsystem_add_listener", 00:17:36.978 "params": { 00:17:36.978 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.978 "listen_address": { 00:17:36.978 "trtype": "TCP", 00:17:36.978 "adrfam": "IPv4", 00:17:36.978 "traddr": "10.0.0.3", 00:17:36.978 "trsvcid": "4420" 00:17:36.978 }, 00:17:36.978 "secure_channel": false, 00:17:36.978 "sock_impl": "ssl" 00:17:36.978 } 00:17:36.978 } 00:17:36.978 ] 00:17:36.978 } 00:17:36.978 ] 00:17:36.978 }' 00:17:36.978 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:37.263 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:17:37.263 "subsystems": [ 00:17:37.263 { 00:17:37.263 "subsystem": "keyring", 00:17:37.263 "config": [ 00:17:37.263 { 00:17:37.263 "method": "keyring_file_add_key", 00:17:37.263 "params": { 00:17:37.263 "name": "key0", 00:17:37.263 "path": "/tmp/tmp.L9GucbBzHC" 00:17:37.263 } 00:17:37.263 } 00:17:37.263 ] 00:17:37.263 }, 00:17:37.263 { 00:17:37.263 "subsystem": "iobuf", 00:17:37.263 "config": [ 00:17:37.263 { 00:17:37.263 "method": 
"iobuf_set_options", 00:17:37.263 "params": { 00:17:37.263 "small_pool_count": 8192, 00:17:37.263 "large_pool_count": 1024, 00:17:37.263 "small_bufsize": 8192, 00:17:37.263 "large_bufsize": 135168, 00:17:37.263 "enable_numa": false 00:17:37.263 } 00:17:37.263 } 00:17:37.263 ] 00:17:37.263 }, 00:17:37.263 { 00:17:37.263 "subsystem": "sock", 00:17:37.263 "config": [ 00:17:37.263 { 00:17:37.263 "method": "sock_set_default_impl", 00:17:37.263 "params": { 00:17:37.263 "impl_name": "uring" 00:17:37.263 } 00:17:37.263 }, 00:17:37.263 { 00:17:37.263 "method": "sock_impl_set_options", 00:17:37.263 "params": { 00:17:37.263 "impl_name": "ssl", 00:17:37.263 "recv_buf_size": 4096, 00:17:37.263 "send_buf_size": 4096, 00:17:37.263 "enable_recv_pipe": true, 00:17:37.263 "enable_quickack": false, 00:17:37.263 "enable_placement_id": 0, 00:17:37.263 "enable_zerocopy_send_server": true, 00:17:37.263 "enable_zerocopy_send_client": false, 00:17:37.263 "zerocopy_threshold": 0, 00:17:37.263 "tls_version": 0, 00:17:37.263 "enable_ktls": false 00:17:37.263 } 00:17:37.263 }, 00:17:37.263 { 00:17:37.263 "method": "sock_impl_set_options", 00:17:37.263 "params": { 00:17:37.263 "impl_name": "posix", 00:17:37.263 "recv_buf_size": 2097152, 00:17:37.263 "send_buf_size": 2097152, 00:17:37.263 "enable_recv_pipe": true, 00:17:37.263 "enable_quickack": false, 00:17:37.263 "enable_placement_id": 0, 00:17:37.263 "enable_zerocopy_send_server": true, 00:17:37.263 "enable_zerocopy_send_client": false, 00:17:37.263 "zerocopy_threshold": 0, 00:17:37.263 "tls_version": 0, 00:17:37.263 "enable_ktls": false 00:17:37.263 } 00:17:37.263 }, 00:17:37.263 { 00:17:37.263 "method": "sock_impl_set_options", 00:17:37.263 "params": { 00:17:37.263 "impl_name": "uring", 00:17:37.263 "recv_buf_size": 2097152, 00:17:37.263 "send_buf_size": 2097152, 00:17:37.263 "enable_recv_pipe": true, 00:17:37.263 "enable_quickack": false, 00:17:37.263 "enable_placement_id": 0, 00:17:37.263 "enable_zerocopy_send_server": false, 00:17:37.263 "enable_zerocopy_send_client": false, 00:17:37.263 "zerocopy_threshold": 0, 00:17:37.263 "tls_version": 0, 00:17:37.263 "enable_ktls": false 00:17:37.263 } 00:17:37.263 } 00:17:37.263 ] 00:17:37.263 }, 00:17:37.263 { 00:17:37.263 "subsystem": "vmd", 00:17:37.263 "config": [] 00:17:37.263 }, 00:17:37.263 { 00:17:37.263 "subsystem": "accel", 00:17:37.263 "config": [ 00:17:37.263 { 00:17:37.263 "method": "accel_set_options", 00:17:37.263 "params": { 00:17:37.263 "small_cache_size": 128, 00:17:37.263 "large_cache_size": 16, 00:17:37.263 "task_count": 2048, 00:17:37.263 "sequence_count": 2048, 00:17:37.263 "buf_count": 2048 00:17:37.263 } 00:17:37.263 } 00:17:37.263 ] 00:17:37.263 }, 00:17:37.263 { 00:17:37.263 "subsystem": "bdev", 00:17:37.263 "config": [ 00:17:37.263 { 00:17:37.263 "method": "bdev_set_options", 00:17:37.263 "params": { 00:17:37.263 "bdev_io_pool_size": 65535, 00:17:37.263 "bdev_io_cache_size": 256, 00:17:37.263 "bdev_auto_examine": true, 00:17:37.263 "iobuf_small_cache_size": 128, 00:17:37.263 "iobuf_large_cache_size": 16 00:17:37.263 } 00:17:37.263 }, 00:17:37.263 { 00:17:37.263 "method": "bdev_raid_set_options", 00:17:37.263 "params": { 00:17:37.263 "process_window_size_kb": 1024, 00:17:37.263 "process_max_bandwidth_mb_sec": 0 00:17:37.263 } 00:17:37.263 }, 00:17:37.263 { 00:17:37.263 "method": "bdev_iscsi_set_options", 00:17:37.263 "params": { 00:17:37.263 "timeout_sec": 30 00:17:37.263 } 00:17:37.263 }, 00:17:37.263 { 00:17:37.263 "method": "bdev_nvme_set_options", 00:17:37.263 "params": { 00:17:37.263 
"action_on_timeout": "none", 00:17:37.263 "timeout_us": 0, 00:17:37.263 "timeout_admin_us": 0, 00:17:37.263 "keep_alive_timeout_ms": 10000, 00:17:37.263 "arbitration_burst": 0, 00:17:37.263 "low_priority_weight": 0, 00:17:37.263 "medium_priority_weight": 0, 00:17:37.263 "high_priority_weight": 0, 00:17:37.263 "nvme_adminq_poll_period_us": 10000, 00:17:37.263 "nvme_ioq_poll_period_us": 0, 00:17:37.263 "io_queue_requests": 512, 00:17:37.263 "delay_cmd_submit": true, 00:17:37.263 "transport_retry_count": 4, 00:17:37.263 "bdev_retry_count": 3, 00:17:37.263 "transport_ack_timeout": 0, 00:17:37.263 "ctrlr_loss_timeout_sec": 0, 00:17:37.263 "reconnect_delay_sec": 0, 00:17:37.263 "fast_io_fail_timeout_sec": 0, 00:17:37.263 "disable_auto_failback": false, 00:17:37.263 "generate_uuids": false, 00:17:37.263 "transport_tos": 0, 00:17:37.263 "nvme_error_stat": false, 00:17:37.264 "rdma_srq_size": 0, 00:17:37.264 "io_path_stat": false, 00:17:37.264 "allow_accel_sequence": false, 00:17:37.264 "rdma_max_cq_size": 0, 00:17:37.264 "rdma_cm_event_timeout_ms": 0, 00:17:37.264 "dhchap_digests": [ 00:17:37.264 "sha256", 00:17:37.264 "sha384", 00:17:37.264 "sha512" 00:17:37.264 ], 00:17:37.264 "dhchap_dhgroups": [ 00:17:37.264 "null", 00:17:37.264 "ffdhe2048", 00:17:37.264 "ffdhe3072", 00:17:37.264 "ffdhe4096", 00:17:37.264 "ffdhe6144", 00:17:37.264 "ffdhe8192" 00:17:37.264 ] 00:17:37.264 } 00:17:37.264 }, 00:17:37.264 { 00:17:37.264 "method": "bdev_nvme_attach_controller", 00:17:37.264 "params": { 00:17:37.264 "name": "nvme0", 00:17:37.264 "trtype": "TCP", 00:17:37.264 "adrfam": "IPv4", 00:17:37.264 "traddr": "10.0.0.3", 00:17:37.264 "trsvcid": "4420", 00:17:37.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.264 "prchk_reftag": false, 00:17:37.264 "prchk_guard": false, 00:17:37.264 "ctrlr_loss_timeout_sec": 0, 00:17:37.264 "reconnect_delay_sec": 0, 00:17:37.264 "fast_io_fail_timeout_sec": 0, 00:17:37.264 "psk": "key0", 00:17:37.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.264 "hdgst": false, 00:17:37.264 "ddgst": false, 00:17:37.264 "multipath": "multipath" 00:17:37.264 } 00:17:37.264 }, 00:17:37.264 { 00:17:37.264 "method": "bdev_nvme_set_hotplug", 00:17:37.264 "params": { 00:17:37.264 "period_us": 100000, 00:17:37.264 "enable": false 00:17:37.264 } 00:17:37.264 }, 00:17:37.264 { 00:17:37.264 "method": "bdev_enable_histogram", 00:17:37.264 "params": { 00:17:37.264 "name": "nvme0n1", 00:17:37.264 "enable": true 00:17:37.264 } 00:17:37.264 }, 00:17:37.264 { 00:17:37.264 "method": "bdev_wait_for_examine" 00:17:37.264 } 00:17:37.264 ] 00:17:37.264 }, 00:17:37.264 { 00:17:37.264 "subsystem": "nbd", 00:17:37.264 "config": [] 00:17:37.264 } 00:17:37.264 ] 00:17:37.264 }' 00:17:37.264 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 75262 00:17:37.264 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75262 ']' 00:17:37.264 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75262 00:17:37.264 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:37.264 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:37.264 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75262 00:17:37.264 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:37.264 killing process with pid 75262 00:17:37.264 
03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:37.264 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75262' 00:17:37.264 Received shutdown signal, test time was about 1.000000 seconds 00:17:37.264 00:17:37.264 Latency(us) 00:17:37.264 [2024-12-05T03:02:08.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.264 [2024-12-05T03:02:08.108Z] =================================================================================================================== 00:17:37.264 [2024-12-05T03:02:08.108Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:37.264 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75262 00:17:37.264 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75262 00:17:38.205 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 75230 00:17:38.205 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75230 ']' 00:17:38.205 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75230 00:17:38.205 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:38.205 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:38.205 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75230 00:17:38.205 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:38.205 killing process with pid 75230 00:17:38.205 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:38.205 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75230' 00:17:38.205 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75230 00:17:38.205 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75230 00:17:39.144 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:17:39.144 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:39.144 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:17:39.144 "subsystems": [ 00:17:39.144 { 00:17:39.144 "subsystem": "keyring", 00:17:39.144 "config": [ 00:17:39.144 { 00:17:39.144 "method": "keyring_file_add_key", 00:17:39.144 "params": { 00:17:39.144 "name": "key0", 00:17:39.144 "path": "/tmp/tmp.L9GucbBzHC" 00:17:39.144 } 00:17:39.144 } 00:17:39.144 ] 00:17:39.144 }, 00:17:39.144 { 00:17:39.144 "subsystem": "iobuf", 00:17:39.144 "config": [ 00:17:39.144 { 00:17:39.144 "method": "iobuf_set_options", 00:17:39.144 "params": { 00:17:39.144 "small_pool_count": 8192, 00:17:39.144 "large_pool_count": 1024, 00:17:39.144 "small_bufsize": 8192, 00:17:39.144 "large_bufsize": 135168, 00:17:39.144 "enable_numa": false 00:17:39.144 } 00:17:39.144 } 00:17:39.144 ] 00:17:39.144 }, 00:17:39.144 { 00:17:39.144 "subsystem": "sock", 00:17:39.144 "config": [ 00:17:39.144 { 00:17:39.144 "method": "sock_set_default_impl", 00:17:39.144 "params": { 00:17:39.144 "impl_name": "uring" 00:17:39.144 } 00:17:39.144 }, 00:17:39.144 { 00:17:39.144 "method": 
"sock_impl_set_options", 00:17:39.144 "params": { 00:17:39.144 "impl_name": "ssl", 00:17:39.144 "recv_buf_size": 4096, 00:17:39.144 "send_buf_size": 4096, 00:17:39.144 "enable_recv_pipe": true, 00:17:39.144 "enable_quickack": false, 00:17:39.144 "enable_placement_id": 0, 00:17:39.144 "enable_zerocopy_send_server": true, 00:17:39.144 "enable_zerocopy_send_client": false, 00:17:39.144 "zerocopy_threshold": 0, 00:17:39.144 "tls_version": 0, 00:17:39.144 "enable_ktls": false 00:17:39.144 } 00:17:39.144 }, 00:17:39.144 { 00:17:39.144 "method": "sock_impl_set_options", 00:17:39.144 "params": { 00:17:39.144 "impl_name": "posix", 00:17:39.144 "recv_buf_size": 2097152, 00:17:39.144 "send_buf_size": 2097152, 00:17:39.144 "enable_recv_pipe": true, 00:17:39.144 "enable_quickack": false, 00:17:39.144 "enable_placement_id": 0, 00:17:39.144 "enable_zerocopy_send_server": true, 00:17:39.144 "enable_zerocopy_send_client": false, 00:17:39.144 "zerocopy_threshold": 0, 00:17:39.144 "tls_version": 0, 00:17:39.144 "enable_ktls": false 00:17:39.144 } 00:17:39.144 }, 00:17:39.144 { 00:17:39.144 "method": "sock_impl_set_options", 00:17:39.144 "params": { 00:17:39.144 "impl_name": "uring", 00:17:39.144 "recv_buf_size": 2097152, 00:17:39.144 "send_buf_size": 2097152, 00:17:39.144 "enable_recv_pipe": true, 00:17:39.144 "enable_quickack": false, 00:17:39.144 "enable_placement_id": 0, 00:17:39.144 "enable_zerocopy_send_server": false, 00:17:39.144 "enable_zerocopy_send_client": false, 00:17:39.144 "zerocopy_threshold": 0, 00:17:39.144 "tls_version": 0, 00:17:39.144 "enable_ktls": false 00:17:39.144 } 00:17:39.144 } 00:17:39.144 ] 00:17:39.144 }, 00:17:39.144 { 00:17:39.144 "subsystem": "vmd", 00:17:39.144 "config": [] 00:17:39.145 }, 00:17:39.145 { 00:17:39.145 "subsystem": "accel", 00:17:39.145 "config": [ 00:17:39.145 { 00:17:39.145 "method": "accel_set_options", 00:17:39.145 "params": { 00:17:39.145 "small_cache_size": 128, 00:17:39.145 "large_cache_size": 16, 00:17:39.145 "task_count": 2048, 00:17:39.145 "sequence_count": 2048, 00:17:39.145 "buf_count": 2048 00:17:39.145 } 00:17:39.145 } 00:17:39.145 ] 00:17:39.145 }, 00:17:39.145 { 00:17:39.145 "subsystem": "bdev", 00:17:39.145 "config": [ 00:17:39.145 { 00:17:39.145 "method": "bdev_set_options", 00:17:39.145 "params": { 00:17:39.145 "bdev_io_pool_size": 65535, 00:17:39.145 "bdev_io_cache_size": 256, 00:17:39.145 "bdev_auto_examine": true, 00:17:39.145 "iobuf_small_cache_size": 128, 00:17:39.145 "iobuf_large_cache_size": 16 00:17:39.145 } 00:17:39.145 }, 00:17:39.145 { 00:17:39.145 "method": "bdev_raid_set_options", 00:17:39.145 "params": { 00:17:39.145 "process_window_size_kb": 1024, 00:17:39.145 "process_max_bandwidth_mb_sec": 0 00:17:39.145 } 00:17:39.145 }, 00:17:39.145 { 00:17:39.145 "method": "bdev_iscsi_set_options", 00:17:39.145 "params": { 00:17:39.145 "timeout_sec": 30 00:17:39.145 } 00:17:39.145 }, 00:17:39.145 { 00:17:39.145 "method": "bdev_nvme_set_options", 00:17:39.145 "params": { 00:17:39.145 "action_on_timeout": "none", 00:17:39.145 "timeout_us": 0, 00:17:39.145 "timeout_admin_us": 0, 00:17:39.145 "keep_alive_timeout_ms": 10000, 00:17:39.145 "arbitration_burst": 0, 00:17:39.145 "low_priority_weight": 0, 00:17:39.145 "medium_priority_weight": 0, 00:17:39.145 "high_priority_weight": 0, 00:17:39.145 "nvme_adminq_poll_period_us": 10000, 00:17:39.145 "nvme_ioq_poll_period_us": 0, 00:17:39.145 "io_queue_requests": 0, 00:17:39.145 "delay_cmd_submit": true, 00:17:39.145 "transport_retry_count": 4, 00:17:39.145 "bdev_retry_count": 3, 00:17:39.145 
"transport_ack_timeout": 0, 00:17:39.145 "ctrlr_loss_timeout_sec": 0, 00:17:39.145 "reconnect_delay_sec": 0, 00:17:39.145 "fast_io_fail_timeout_sec": 0, 00:17:39.145 "disable_auto_failback": false, 00:17:39.145 "generate_uuids": false, 00:17:39.145 "transport_tos": 0, 00:17:39.145 "nvme_error_stat": false, 00:17:39.145 "rdma_srq_size": 0, 00:17:39.145 "io_path_stat": false, 00:17:39.145 "allow_accel_sequence": false, 00:17:39.145 "rdma_max_cq_size": 0, 00:17:39.145 "rdma_cm_event_timeout_ms": 0, 00:17:39.145 "dhchap_digests": [ 00:17:39.145 "sha256", 00:17:39.145 "sha384", 00:17:39.145 "sha512" 00:17:39.145 ], 00:17:39.145 "dhchap_dhgroups": [ 00:17:39.145 "null", 00:17:39.145 "ffdhe2048", 00:17:39.145 "ffdhe3072", 00:17:39.145 "ffdhe4096", 00:17:39.145 "ffdhe6144", 00:17:39.145 "ffdhe8192" 00:17:39.145 ] 00:17:39.145 } 00:17:39.145 }, 00:17:39.145 { 00:17:39.145 "method": "bdev_nvme_set_hotplug", 00:17:39.145 "params": { 00:17:39.145 "period_us": 100000, 00:17:39.145 "enable": false 00:17:39.145 } 00:17:39.145 }, 00:17:39.145 { 00:17:39.145 "method": "bdev_malloc_create", 00:17:39.145 "params": { 00:17:39.145 "name": "malloc0", 00:17:39.145 "num_blocks": 8192, 00:17:39.145 "block_size": 4096, 00:17:39.145 "physical_block_size": 4096, 00:17:39.145 "uuid": "7939427c-084e-4ce8-88bd-2a7bdd289a83", 00:17:39.145 "optimal_io_boundary": 0, 00:17:39.145 "md_size": 0, 00:17:39.145 "dif_type": 0, 00:17:39.145 "dif_is_head_of_md": false, 00:17:39.145 "dif_pi_format": 0 00:17:39.145 } 00:17:39.145 }, 00:17:39.145 { 00:17:39.145 "method": "bdev_wait_for_examine" 00:17:39.145 } 00:17:39.145 ] 00:17:39.145 }, 00:17:39.145 { 00:17:39.145 "subsystem": "nbd", 00:17:39.145 "config": [] 00:17:39.145 }, 00:17:39.145 { 00:17:39.145 "subsystem": "scheduler", 00:17:39.145 "config": [ 00:17:39.145 { 00:17:39.145 "method": "framework_set_scheduler", 00:17:39.145 "params": { 00:17:39.145 "name": "static" 00:17:39.145 } 00:17:39.145 } 00:17:39.145 ] 00:17:39.145 }, 00:17:39.145 { 00:17:39.145 "subsystem": "nvmf", 00:17:39.145 "config": [ 00:17:39.145 { 00:17:39.145 "method": "nvmf_set_config", 00:17:39.145 "params": { 00:17:39.145 "discovery_filter": "match_any", 00:17:39.145 "admin_cmd_passthru": { 00:17:39.145 "identify_ctrlr": false 00:17:39.145 }, 00:17:39.145 "dhchap_digests": [ 00:17:39.145 "sha256", 00:17:39.145 "sha384", 00:17:39.145 "sha512" 00:17:39.145 ], 00:17:39.145 "dhchap_dhgroups": [ 00:17:39.145 "null", 00:17:39.145 "ffdhe2048", 00:17:39.145 "ffdhe3072", 00:17:39.145 "ffdhe4096", 00:17:39.145 "ffdhe6144", 00:17:39.145 "ffdhe8192" 00:17:39.145 ] 00:17:39.145 } 00:17:39.145 }, 00:17:39.145 { 00:17:39.145 "method": "nvmf_set_max_subsystems", 00:17:39.145 "params": { 00:17:39.145 "max_subsystems": 1024 00:17:39.145 } 00:17:39.145 }, 00:17:39.145 { 00:17:39.145 "method": "nvmf_set_crdt", 00:17:39.145 "params": { 00:17:39.145 "crdt1": 0, 00:17:39.145 "crdt2": 0, 00:17:39.145 "crdt3": 0 00:17:39.145 } 00:17:39.145 }, 00:17:39.145 { 00:17:39.145 "method": "nvmf_create_transport", 00:17:39.145 "params": { 00:17:39.145 "trtype": "TCP", 00:17:39.145 "max_queue_depth": 128, 00:17:39.145 "max_io_qpairs_per_ctrlr": 127, 00:17:39.145 "in_capsule_data_size": 4096, 00:17:39.145 "max_io_size": 131072, 00:17:39.145 "io_unit_size": 131072, 00:17:39.145 "max_aq_depth": 128, 00:17:39.145 "num_shared_buffers": 511, 00:17:39.145 "buf_cache_size": 4294967295, 00:17:39.145 "dif_insert_or_strip": false, 00:17:39.145 "zcopy": false, 00:17:39.145 "c2h_success": false, 00:17:39.145 "sock_priority": 0, 00:17:39.145 
"abort_timeout_sec": 1, 00:17:39.145 "ack_timeout": 0, 00:17:39.145 "data_wr_pool_size": 0 00:17:39.145 } 00:17:39.145 }, 00:17:39.145 { 00:17:39.145 "method": "nvmf_create_subsystem", 00:17:39.145 "params": { 00:17:39.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.146 "allow_any_host": false, 00:17:39.146 "serial_number": "00000000000000000000", 00:17:39.146 "model_number": "SPDK bdev Controller", 00:17:39.146 "max_namespaces": 32, 00:17:39.146 "min_cntlid": 1, 00:17:39.146 "max_cntlid": 65519, 00:17:39.146 "ana_reporting": false 00:17:39.146 } 00:17:39.146 }, 00:17:39.146 { 00:17:39.146 "method": "nvmf_subsystem_add_host", 00:17:39.146 "params": { 00:17:39.146 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.146 "host": "nqn.2016-06.io.spdk:host1", 00:17:39.146 "psk": "key0" 00:17:39.146 } 00:17:39.146 }, 00:17:39.146 { 00:17:39.146 "method": "nvmf_subsystem_add_ns", 00:17:39.146 "params": { 00:17:39.146 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.146 "namespace": { 00:17:39.146 "nsid": 1, 00:17:39.146 "bdev_name": "malloc0", 00:17:39.146 "nguid": "7939427C084E4CE888BD2A7BDD289A83", 00:17:39.146 "uuid": "7939427c-084e-4ce8-88bd-2a7bdd289a83", 00:17:39.146 "no_auto_visible": false 00:17:39.146 } 00:17:39.146 } 00:17:39.146 }, 00:17:39.146 { 00:17:39.146 "method": "nvmf_subsystem_add_listener", 00:17:39.146 "params": { 00:17:39.146 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.146 "listen_address": { 00:17:39.146 "trtype": "TCP", 00:17:39.146 "adrfam": "IPv4", 00:17:39.146 "traddr": "10.0.0.3", 00:17:39.146 "trsvcid": "4420" 00:17:39.146 }, 00:17:39.146 "secure_channel": false, 00:17:39.146 "sock_impl": "ssl" 00:17:39.146 } 00:17:39.146 } 00:17:39.146 ] 00:17:39.146 } 00:17:39.146 ] 00:17:39.146 }' 00:17:39.146 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:39.146 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.146 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75336 00:17:39.146 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:39.146 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75336 00:17:39.146 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75336 ']' 00:17:39.146 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.146 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.146 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.146 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.146 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.146 [2024-12-05 03:02:09.787220] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:17:39.146 [2024-12-05 03:02:09.787362] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.146 [2024-12-05 03:02:09.953660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.405 [2024-12-05 03:02:10.037111] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.405 [2024-12-05 03:02:10.037173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.405 [2024-12-05 03:02:10.037190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.405 [2024-12-05 03:02:10.037210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.405 [2024-12-05 03:02:10.037222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:39.405 [2024-12-05 03:02:10.038337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.665 [2024-12-05 03:02:10.304791] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:39.665 [2024-12-05 03:02:10.445393] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.665 [2024-12-05 03:02:10.477376] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:39.665 [2024-12-05 03:02:10.477631] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:39.925 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.925 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:39.925 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:39.925 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:39.925 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
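The target above was started with -c /dev/fd/62, reading the JSON that the test script echoes. Trimmed to the TLS-relevant methods (the full config also tunes iobuf/sock/bdev options and creates the malloc0 namespace), the same setup looks roughly like the sketch below; the PSK path, NQNs and address are the ones from this run, and feeding the config over a stdin heredoc instead of /dev/fd/62 is a simplification.

# Condensed sketch of the config streamed to nvmf_tgt above (TLS-relevant parts only).
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -c /dev/stdin <<'EOF'
{ "subsystems": [
    { "subsystem": "keyring", "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.L9GucbBzHC" } } ] },
    { "subsystem": "nvmf", "config": [
        { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } },
        { "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.3", "trsvcid": "4420" },
                      "secure_channel": false, "sock_impl": "ssl" } } ] } ] }
EOF

Note that TLS is selected per listener ("sock_impl": "ssl", "secure_channel": false) while the PSK itself is bound to the allowed host via "psk": "key0".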
00:17:39.925 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.925 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=75368 00:17:39.925 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 75368 /var/tmp/bdevperf.sock 00:17:39.925 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75368 ']' 00:17:39.925 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:39.925 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:39.925 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.925 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:17:39.925 "subsystems": [ 00:17:39.925 { 00:17:39.925 "subsystem": "keyring", 00:17:39.925 "config": [ 00:17:39.925 { 00:17:39.925 "method": "keyring_file_add_key", 00:17:39.925 "params": { 00:17:39.925 "name": "key0", 00:17:39.925 "path": "/tmp/tmp.L9GucbBzHC" 00:17:39.925 } 00:17:39.925 } 00:17:39.925 ] 00:17:39.925 }, 00:17:39.925 { 00:17:39.925 "subsystem": "iobuf", 00:17:39.925 "config": [ 00:17:39.925 { 00:17:39.925 "method": "iobuf_set_options", 00:17:39.925 "params": { 00:17:39.925 "small_pool_count": 8192, 00:17:39.925 "large_pool_count": 1024, 00:17:39.925 "small_bufsize": 8192, 00:17:39.925 "large_bufsize": 135168, 00:17:39.925 "enable_numa": false 00:17:39.925 } 00:17:39.925 } 00:17:39.925 ] 00:17:39.925 }, 00:17:39.925 { 00:17:39.925 "subsystem": "sock", 00:17:39.925 "config": [ 00:17:39.925 { 00:17:39.925 "method": "sock_set_default_impl", 00:17:39.925 "params": { 00:17:39.925 "impl_name": "uring" 00:17:39.925 } 00:17:39.925 }, 00:17:39.925 { 00:17:39.925 "method": "sock_impl_set_options", 00:17:39.925 "params": { 00:17:39.925 "impl_name": "ssl", 00:17:39.925 "recv_buf_size": 4096, 00:17:39.925 "send_buf_size": 4096, 00:17:39.925 "enable_recv_pipe": true, 00:17:39.925 "enable_quickack": false, 00:17:39.925 "enable_placement_id": 0, 00:17:39.925 "enable_zerocopy_send_server": true, 00:17:39.925 "enable_zerocopy_send_client": false, 00:17:39.925 "zerocopy_threshold": 0, 00:17:39.925 "tls_version": 0, 00:17:39.925 "enable_ktls": false 00:17:39.925 } 00:17:39.925 }, 00:17:39.925 { 00:17:39.925 "method": "sock_impl_set_options", 00:17:39.925 "params": { 00:17:39.925 "impl_name": "posix", 00:17:39.925 "recv_buf_size": 2097152, 00:17:39.925 "send_buf_size": 2097152, 00:17:39.925 "enable_recv_pipe": true, 00:17:39.925 "enable_quickack": false, 00:17:39.925 "enable_placement_id": 0, 00:17:39.925 "enable_zerocopy_send_server": true, 00:17:39.925 "enable_zerocopy_send_client": false, 00:17:39.925 "zerocopy_threshold": 0, 00:17:39.925 "tls_version": 0, 00:17:39.925 "enable_ktls": false 00:17:39.925 } 00:17:39.925 }, 00:17:39.925 { 00:17:39.925 "method": "sock_impl_set_options", 00:17:39.925 "params": { 00:17:39.925 "impl_name": "uring", 00:17:39.925 "recv_buf_size": 2097152, 00:17:39.925 "send_buf_size": 2097152, 00:17:39.925 "enable_recv_pipe": true, 00:17:39.925 "enable_quickack": false, 00:17:39.925 "enable_placement_id": 0, 00:17:39.925 "enable_zerocopy_send_server": false, 00:17:39.925 "enable_zerocopy_send_client": false, 00:17:39.925 
"zerocopy_threshold": 0, 00:17:39.925 "tls_version": 0, 00:17:39.925 "enable_ktls": false 00:17:39.925 } 00:17:39.925 } 00:17:39.925 ] 00:17:39.925 }, 00:17:39.925 { 00:17:39.925 "subsystem": "vmd", 00:17:39.925 "config": [] 00:17:39.925 }, 00:17:39.925 { 00:17:39.925 "subsystem": "accel", 00:17:39.925 "config": [ 00:17:39.925 { 00:17:39.925 "method": "accel_set_options", 00:17:39.925 "params": { 00:17:39.926 "small_cache_size": 128, 00:17:39.926 "large_cache_size": 16, 00:17:39.926 "task_count": 2048, 00:17:39.926 "sequence_count": 2048, 00:17:39.926 "buf_count": 2048 00:17:39.926 } 00:17:39.926 } 00:17:39.926 ] 00:17:39.926 }, 00:17:39.926 { 00:17:39.926 "subsystem": "bdev", 00:17:39.926 "config": [ 00:17:39.926 { 00:17:39.926 "method": "bdev_set_options", 00:17:39.926 "params": { 00:17:39.926 "bdev_io_pool_size": 65535, 00:17:39.926 "bdev_io_cache_size": 256, 00:17:39.926 "bdev_auto_examine": true, 00:17:39.926 "iobuf_small_cache_size": 128, 00:17:39.926 "iobuf_large_cache_size": 16 00:17:39.926 } 00:17:39.926 }, 00:17:39.926 { 00:17:39.926 "method": "bdev_raid_set_options", 00:17:39.926 "params": { 00:17:39.926 "process_window_size_kb": 1024, 00:17:39.926 "process_max_bandwidth_mb_sec": 0 00:17:39.926 } 00:17:39.926 }, 00:17:39.926 { 00:17:39.926 "method": "bdev_iscsi_set_options", 00:17:39.926 "params": { 00:17:39.926 "timeout_sec": 30 00:17:39.926 } 00:17:39.926 }, 00:17:39.926 { 00:17:39.926 "method": "bdev_nvme_set_options", 00:17:39.926 "params": { 00:17:39.926 "action_on_timeout": "none", 00:17:39.926 "timeout_us": 0, 00:17:39.926 "timeout_admin_us": 0, 00:17:39.926 "keep_alive_timeout_ms": 10000, 00:17:39.926 "arbitration_burst": 0, 00:17:39.926 "low_priority_weight": 0, 00:17:39.926 "medium_priority_weight": 0, 00:17:39.926 "high_priority_weight": 0, 00:17:39.926 "nvme_adminq_poll_period_us": 10000, 00:17:39.926 "nvme_ioq_poll_period_us": 0, 00:17:39.926 "io_queue_requests": 512, 00:17:39.926 "delay_cmd_submit": true, 00:17:39.926 "transport_retry_count": 4, 00:17:39.926 "bdev_retry_count": 3, 00:17:39.926 "transport_ack_timeout": 0, 00:17:39.926 "ctrlr_loss_timeout_sec": 0, 00:17:39.926 "reconnect_delay_sec": 0, 00:17:39.926 "fast_io_fail_timeout_sec": 0, 00:17:39.926 "disable_auto_failback": false, 00:17:39.926 "generate_uuids": false, 00:17:39.926 "transport_tos": 0, 00:17:39.926 "nvme_error_stat": false, 00:17:39.926 "rdma_srq_size": 0, 00:17:39.926 "io_path_stat": false, 00:17:39.926 "allow_accel_sequence": false, 00:17:39.926 "rdma_max_cq_size": 0, 00:17:39.926 "rdma_cm_event_timeout_ms": 0, 00:17:39.926 "dhchap_digests": [ 00:17:39.926 "sha256", 00:17:39.926 "sha384", 00:17:39.926 "sha512" 00:17:39.926 ], 00:17:39.926 "dhchap_dhgroups": [ 00:17:39.926 "null", 00:17:39.926 "ffdhe2048", 00:17:39.926 "ffdhe3072", 00:17:39.926 "ffdhe4096", 00:17:39.926 "ffdhe6144", 00:17:39.926 "ffdhe8192" 00:17:39.926 ] 00:17:39.926 } 00:17:39.926 }, 00:17:39.926 { 00:17:39.926 "method": "bdev_nvme_attach_controller", 00:17:39.926 "params": { 00:17:39.926 "name": "nvme0", 00:17:39.926 "trtype": "TCP", 00:17:39.926 "adrfam": "IPv4", 00:17:39.926 "traddr": "10.0.0.3", 00:17:39.926 "trsvcid": "4420", 00:17:39.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.926 "prchk_reftag": false, 00:17:39.926 "prchk_guard": false, 00:17:39.926 "ctrlr_loss_timeout_sec": 0, 00:17:39.926 "reconnect_delay_sec": 0, 00:17:39.926 "fast_io_fail_timeout_sec": 0, 00:17:39.926 "psk": "key0", 00:17:39.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:39.926 "hdgst": false, 00:17:39.926 "ddgst": false, 00:17:39.926 
"multipath": "multipath" 00:17:39.926 } 00:17:39.926 }, 00:17:39.926 { 00:17:39.926 "method": "bdev_nvme_set_hotplug", 00:17:39.926 "params": { 00:17:39.926 "period_us": 100000, 00:17:39.926 "enable": false 00:17:39.926 } 00:17:39.926 }, 00:17:39.926 { 00:17:39.926 "method": "bdev_enable_histogram", 00:17:39.926 "params": { 00:17:39.926 "name": "nvme0n1", 00:17:39.926 "enable": true 00:17:39.926 } 00:17:39.926 }, 00:17:39.926 { 00:17:39.926 "method": "bdev_wait_for_examine" 00:17:39.926 } 00:17:39.926 ] 00:17:39.926 }, 00:17:39.926 { 00:17:39.926 "subsystem": "nbd", 00:17:39.926 "config": [] 00:17:39.926 } 00:17:39.926 ] 00:17:39.926 }' 00:17:39.926 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:39.926 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.926 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.185 [2024-12-05 03:02:10.864530] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:17:40.185 [2024-12-05 03:02:10.864707] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75368 ] 00:17:40.445 [2024-12-05 03:02:11.038532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.445 [2024-12-05 03:02:11.126879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.703 [2024-12-05 03:02:11.363573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:40.703 [2024-12-05 03:02:11.466527] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:41.271 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.271 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:41.271 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:17:41.271 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:41.271 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.271 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:41.529 Running I/O for 1 seconds... 
00:17:42.465 3412.00 IOPS, 13.33 MiB/s 00:17:42.465 Latency(us) 00:17:42.465 [2024-12-05T03:02:13.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.465 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:42.465 Verification LBA range: start 0x0 length 0x2000 00:17:42.465 nvme0n1 : 1.03 3436.29 13.42 0.00 0.00 36689.71 7745.16 23354.65 00:17:42.465 [2024-12-05T03:02:13.309Z] =================================================================================================================== 00:17:42.465 [2024-12-05T03:02:13.309Z] Total : 3436.29 13.42 0.00 0.00 36689.71 7745.16 23354.65 00:17:42.465 { 00:17:42.465 "results": [ 00:17:42.465 { 00:17:42.465 "job": "nvme0n1", 00:17:42.465 "core_mask": "0x2", 00:17:42.465 "workload": "verify", 00:17:42.465 "status": "finished", 00:17:42.465 "verify_range": { 00:17:42.465 "start": 0, 00:17:42.465 "length": 8192 00:17:42.465 }, 00:17:42.465 "queue_depth": 128, 00:17:42.465 "io_size": 4096, 00:17:42.465 "runtime": 1.030473, 00:17:42.465 "iops": 3436.2860550446253, 00:17:42.465 "mibps": 13.422992402518068, 00:17:42.465 "io_failed": 0, 00:17:42.465 "io_timeout": 0, 00:17:42.465 "avg_latency_us": 36689.71152473621, 00:17:42.465 "min_latency_us": 7745.163636363636, 00:17:42.465 "max_latency_us": 23354.647272727274 00:17:42.465 } 00:17:42.465 ], 00:17:42.465 "core_count": 1 00:17:42.465 } 00:17:42.466 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:17:42.466 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:17:42.466 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:42.466 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:17:42.466 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:17:42.466 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:42.466 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:42.466 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:42.466 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:42.466 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:42.466 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:42.466 nvmf_trace.0 00:17:42.724 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:17:42.724 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 75368 00:17:42.724 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75368 ']' 00:17:42.724 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75368 00:17:42.725 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:42.725 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.725 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75368 00:17:42.725 killing process 
with pid 75368 00:17:42.725 Received shutdown signal, test time was about 1.000000 seconds 00:17:42.725 00:17:42.725 Latency(us) 00:17:42.725 [2024-12-05T03:02:13.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.725 [2024-12-05T03:02:13.569Z] =================================================================================================================== 00:17:42.725 [2024-12-05T03:02:13.569Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:42.725 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:42.725 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:42.725 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75368' 00:17:42.725 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75368 00:17:42.725 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75368 00:17:43.292 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:43.292 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:43.292 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:17:43.551 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:43.552 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:17:43.552 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:43.552 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:43.552 rmmod nvme_tcp 00:17:43.552 rmmod nvme_fabrics 00:17:43.552 rmmod nvme_keyring 00:17:43.552 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:43.552 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:17:43.552 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:17:43.552 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 75336 ']' 00:17:43.552 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 75336 00:17:43.552 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75336 ']' 00:17:43.552 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75336 00:17:43.552 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:43.552 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.552 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75336 00:17:43.552 killing process with pid 75336 00:17:43.552 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:43.552 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:43.552 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75336' 00:17:43.552 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75336 00:17:43.552 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75336 
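Both shutdowns above (bdevperf pid 75368, then the target pid 75336) follow the same killprocess trace: confirm the pid is still alive, look up its process name, log, signal, and reap. A minimal sketch of that pattern; the real helper in autotest_common.sh additionally special-cases processes started under sudo.

# Minimal sketch of the killprocess pattern visible in the trace above.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1      # still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")     # reactor_0 / reactor_1 in this run
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap and ignore the exit status
}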
00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.514 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.EkH5qNGVg0 /tmp/tmp.zcjTUcDeB9 /tmp/tmp.L9GucbBzHC 00:17:44.773 00:17:44.773 real 1m47.580s 00:17:44.773 user 2m59.028s 00:17:44.773 sys 0m26.269s 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.773 
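nvmftestfini above unwinds the test environment: the SPDK-tagged iptables rules are filtered out, the veth pairs and bridge are deleted, the target namespace goes away, and the temporary PSK files are removed. The same teardown as a standalone sketch (device and namespace names are the ones used throughout this run; the namespace removal itself happens inside remove_spdk_ns, whose output is redirected above, so that line is an assumption).

# Sketch of the nvmf_veth_fini / cleanup sequence traced above.
iptables-save | grep -v SPDK_NVMF | iptables-restore        # drop only the SPDK-tagged rules
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster 2>/dev/null || true
    ip link set "$dev" down 2>/dev/null || true
done
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if 2>/dev/null || true
ip link delete nvmf_init_if2 2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true        # what remove_spdk_ns boils down to (assumed)
rm -f /tmp/tmp.EkH5qNGVg0 /tmp/tmp.zcjTUcDeB9 /tmp/tmp.L9GucbBzHC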
************************************ 00:17:44.773 END TEST nvmf_tls 00:17:44.773 ************************************ 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:44.773 ************************************ 00:17:44.773 START TEST nvmf_fips 00:17:44.773 ************************************ 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:44.773 * Looking for test storage... 00:17:44.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:44.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.773 --rc genhtml_branch_coverage=1 00:17:44.773 --rc genhtml_function_coverage=1 00:17:44.773 --rc genhtml_legend=1 00:17:44.773 --rc geninfo_all_blocks=1 00:17:44.773 --rc geninfo_unexecuted_blocks=1 00:17:44.773 00:17:44.773 ' 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:44.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.773 --rc genhtml_branch_coverage=1 00:17:44.773 --rc genhtml_function_coverage=1 00:17:44.773 --rc genhtml_legend=1 00:17:44.773 --rc geninfo_all_blocks=1 00:17:44.773 --rc geninfo_unexecuted_blocks=1 00:17:44.773 00:17:44.773 ' 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:44.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.773 --rc genhtml_branch_coverage=1 00:17:44.773 --rc genhtml_function_coverage=1 00:17:44.773 --rc genhtml_legend=1 00:17:44.773 --rc geninfo_all_blocks=1 00:17:44.773 --rc geninfo_unexecuted_blocks=1 00:17:44.773 00:17:44.773 ' 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:44.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.773 --rc genhtml_branch_coverage=1 00:17:44.773 --rc genhtml_function_coverage=1 00:17:44.773 --rc genhtml_legend=1 00:17:44.773 --rc geninfo_all_blocks=1 00:17:44.773 --rc geninfo_unexecuted_blocks=1 00:17:44.773 00:17:44.773 ' 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
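The scripts/common.sh trace above (and the ge 3.1.1 3.0.0 OpenSSL check a little further down) is the shared cmp_versions helper: both version strings are split on '.', '-' and ':' and their fields compared as integers. A condensed sketch of that logic, limited to the two operators used in this log; the real helper validates that each field is numeric and supports more operators.

# Condensed sketch of the cmp_versions logic traced above (only '<' and '>=').
lt() { cmp_versions "$1" '<'  "$2"; }
ge() { cmp_versions "$1" '>=' "$2"; }
cmp_versions() {
    local IFS=.-:
    local op=$2 v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}       # missing fields count as 0
        if ((d1 > d2)); then [[ $op == '>=' ]]; return; fi
        if ((d1 < d2)); then [[ $op == '<'  ]]; return; fi
    done
    [[ $op == '>=' ]]                                 # all fields equal
}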
00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:44.773 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:45.033 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:17:45.033 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:17:45.034 Error setting digest 00:17:45.034 4022110DA57F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:45.034 4022110DA57F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:45.034 
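The NOT wrapper above is an inverted check: with OPENSSL_CONF pointing at the generated spdk_fips.conf and the Red Hat FIPS provider listed, MD5 has to be rejected, so the step only passes because openssl md5 fails with the digest error shown. The same probe as a standalone sketch:

# Sketch of the negative MD5 probe above: success here would mean FIPS is not being enforced.
export OPENSSL_CONF=spdk_fips.conf
if openssl md5 <<< "spdk" >/dev/null 2>&1; then
    echo "ERROR: openssl md5 succeeded, FIPS mode is not active" >&2
    exit 1
fi
echo "openssl md5 rejected as expected; FIPS provider is enforcing"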
03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:45.034 Cannot find device "nvmf_init_br" 00:17:45.034 03:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:45.034 Cannot find device "nvmf_init_br2" 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:45.034 Cannot find device "nvmf_tgt_br" 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:45.034 Cannot find device "nvmf_tgt_br2" 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:45.034 Cannot find device "nvmf_init_br" 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:45.034 Cannot find device "nvmf_init_br2" 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:17:45.034 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:45.293 Cannot find device "nvmf_tgt_br" 00:17:45.293 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:17:45.293 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:45.293 Cannot find device "nvmf_tgt_br2" 00:17:45.293 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:17:45.293 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:45.293 Cannot find device "nvmf_br" 00:17:45.293 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:17:45.293 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:45.293 Cannot find device "nvmf_init_if" 00:17:45.293 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:17:45.293 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:45.293 Cannot find device "nvmf_init_if2" 00:17:45.293 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:17:45.293 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:45.293 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:45.293 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:17:45.293 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:45.293 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:45.293 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:17:45.293 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:45.293 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:45.293 03:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:45.293 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:45.293 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:45.293 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:45.293 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:45.293 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:45.293 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:45.293 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:45.293 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:45.293 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:45.293 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:45.293 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:45.293 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:45.293 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:45.294 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:45.294 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:45.294 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:45.294 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:45.294 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:45.294 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:45.294 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:45.294 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:45.294 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:45.294 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:45.553 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:45.553 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:17:45.553 00:17:45.553 --- 10.0.0.3 ping statistics --- 00:17:45.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.553 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:45.553 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:45.553 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:17:45.553 00:17:45.553 --- 10.0.0.4 ping statistics --- 00:17:45.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.553 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:45.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:45.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:45.553 00:17:45.553 --- 10.0.0.1 ping statistics --- 00:17:45.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.553 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:45.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:45.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:17:45.553 00:17:45.553 --- 10.0.0.2 ping statistics --- 00:17:45.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.553 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:45.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=75704 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 75704 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 75704 ']' 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.553 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:45.553 [2024-12-05 03:02:16.372475] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:17:45.553 [2024-12-05 03:02:16.373029] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.813 [2024-12-05 03:02:16.564216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.072 [2024-12-05 03:02:16.755608] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.072 [2024-12-05 03:02:16.756045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.072 [2024-12-05 03:02:16.756294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.072 [2024-12-05 03:02:16.756509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.072 [2024-12-05 03:02:16.756546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.072 [2024-12-05 03:02:16.758316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.331 [2024-12-05 03:02:16.934095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:46.590 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.590 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:17:46.590 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:46.590 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:46.590 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:46.590 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.590 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:46.590 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:46.590 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:17:46.590 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.BFq 00:17:46.590 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:46.590 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.BFq 00:17:46.590 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.BFq 00:17:46.590 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.BFq 00:17:46.590 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:46.849 [2024-12-05 03:02:17.567271] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.849 [2024-12-05 03:02:17.583198] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:46.849 [2024-12-05 03:02:17.583492] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:46.849 malloc0 00:17:46.849 03:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:46.849 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=75746 00:17:46.849 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:46.849 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 75746 /var/tmp/bdevperf.sock 00:17:46.849 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 75746 ']' 00:17:46.849 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:46.849 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.849 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:46.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:46.849 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.849 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:47.107 [2024-12-05 03:02:17.785747] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:17:47.107 [2024-12-05 03:02:17.786146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75746 ] 00:17:47.366 [2024-12-05 03:02:17.962486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.366 [2024-12-05 03:02:18.088076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.625 [2024-12-05 03:02:18.262677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:47.884 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.884 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:17:47.884 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.BFq 00:17:48.143 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:48.402 [2024-12-05 03:02:19.134276] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:48.402 TLSTESTn1 00:17:48.402 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:48.661 Running I/O for 10 seconds... 
00:17:50.536 3150.00 IOPS, 12.30 MiB/s [2024-12-05T03:02:22.760Z] 3202.50 IOPS, 12.51 MiB/s [2024-12-05T03:02:23.697Z] 3245.67 IOPS, 12.68 MiB/s [2024-12-05T03:02:24.633Z] 3250.50 IOPS, 12.70 MiB/s [2024-12-05T03:02:25.570Z] 3244.40 IOPS, 12.67 MiB/s [2024-12-05T03:02:26.505Z] 3256.83 IOPS, 12.72 MiB/s [2024-12-05T03:02:27.442Z] 3267.00 IOPS, 12.76 MiB/s [2024-12-05T03:02:28.451Z] 3274.62 IOPS, 12.79 MiB/s [2024-12-05T03:02:29.389Z] 3273.78 IOPS, 12.79 MiB/s [2024-12-05T03:02:29.389Z] 3278.90 IOPS, 12.81 MiB/s 00:17:58.545 Latency(us) 00:17:58.545 [2024-12-05T03:02:29.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.545 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:58.545 Verification LBA range: start 0x0 length 0x2000 00:17:58.545 TLSTESTn1 : 10.02 3285.16 12.83 0.00 0.00 38894.45 5957.82 31218.97 00:17:58.545 [2024-12-05T03:02:29.389Z] =================================================================================================================== 00:17:58.545 [2024-12-05T03:02:29.389Z] Total : 3285.16 12.83 0.00 0.00 38894.45 5957.82 31218.97 00:17:58.545 { 00:17:58.545 "results": [ 00:17:58.545 { 00:17:58.545 "job": "TLSTESTn1", 00:17:58.545 "core_mask": "0x4", 00:17:58.545 "workload": "verify", 00:17:58.545 "status": "finished", 00:17:58.545 "verify_range": { 00:17:58.545 "start": 0, 00:17:58.545 "length": 8192 00:17:58.545 }, 00:17:58.545 "queue_depth": 128, 00:17:58.545 "io_size": 4096, 00:17:58.545 "runtime": 10.019911, 00:17:58.545 "iops": 3285.15892007424, 00:17:58.545 "mibps": 12.83265203154, 00:17:58.545 "io_failed": 0, 00:17:58.545 "io_timeout": 0, 00:17:58.545 "avg_latency_us": 38894.44640917791, 00:17:58.545 "min_latency_us": 5957.818181818182, 00:17:58.545 "max_latency_us": 31218.967272727274 00:17:58.545 } 00:17:58.545 ], 00:17:58.545 "core_count": 1 00:17:58.545 } 00:17:58.545 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:58.545 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:58.545 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:17:58.545 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:17:58.545 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:58.545 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:58.545 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:58.545 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:58.545 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:58.545 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:58.804 nvmf_trace.0 00:17:58.804 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:17:58.804 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 75746 00:17:58.804 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 75746 ']' 00:17:58.804 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 75746 
00:17:58.804 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:58.804 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.804 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75746 00:17:58.804 killing process with pid 75746 00:17:58.804 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.804 00:17:58.804 Latency(us) 00:17:58.805 [2024-12-05T03:02:29.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.805 [2024-12-05T03:02:29.649Z] =================================================================================================================== 00:17:58.805 [2024-12-05T03:02:29.649Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:58.805 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:58.805 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:58.805 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75746' 00:17:58.805 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 75746 00:17:58.805 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 75746 00:17:59.741 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:59.741 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:59.741 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:17:59.741 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:59.741 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:17:59.741 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:59.741 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:59.741 rmmod nvme_tcp 00:17:59.741 rmmod nvme_fabrics 00:17:59.741 rmmod nvme_keyring 00:17:59.741 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:59.741 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:17:59.741 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:17:59.741 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 75704 ']' 00:17:59.741 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 75704 00:17:59.741 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 75704 ']' 00:17:59.741 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 75704 00:18:00.001 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:00.001 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.001 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75704 00:18:00.001 killing process with pid 75704 00:18:00.001 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:00.001 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:00.001 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75704' 00:18:00.001 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 75704 00:18:00.001 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 75704 00:18:00.938 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:00.938 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:00.938 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:00.938 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:18:00.938 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:18:00.938 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:00.938 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:18:00.938 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.938 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:00.939 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:00.939 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:00.939 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:00.939 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.939 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:00.939 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:00.939 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:00.939 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:00.939 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:00.939 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:00.939 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:00.939 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:00.939 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:01.198 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:01.198 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.198 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.198 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.198 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:18:01.198 03:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.BFq 00:18:01.198 ************************************ 00:18:01.198 END TEST nvmf_fips 00:18:01.198 ************************************ 00:18:01.198 00:18:01.198 real 0m16.399s 00:18:01.198 user 0m23.634s 00:18:01.198 sys 0m5.329s 00:18:01.198 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:01.198 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:01.198 03:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:01.198 03:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:01.198 03:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:01.198 03:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:01.198 ************************************ 00:18:01.198 START TEST nvmf_control_msg_list 00:18:01.198 ************************************ 00:18:01.198 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:01.198 * Looking for test storage... 00:18:01.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:01.198 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:01.198 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:18:01.198 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:01.198 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:01.198 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:01.198 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:01.198 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:01.198 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:18:01.198 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:18:01.198 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:18:01.198 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:18:01.198 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:18:01.198 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:18:01.198 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:18:01.199 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:01.199 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:18:01.199 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:18:01.199 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:18:01.199 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:01.199 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:18:01.199 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:18:01.199 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:01.199 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:18:01.199 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:18:01.199 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:18:01.199 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:18:01.199 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:01.199 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:01.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.459 --rc genhtml_branch_coverage=1 00:18:01.459 --rc genhtml_function_coverage=1 00:18:01.459 --rc genhtml_legend=1 00:18:01.459 --rc geninfo_all_blocks=1 00:18:01.459 --rc geninfo_unexecuted_blocks=1 00:18:01.459 00:18:01.459 ' 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:01.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.459 --rc genhtml_branch_coverage=1 00:18:01.459 --rc genhtml_function_coverage=1 00:18:01.459 --rc genhtml_legend=1 00:18:01.459 --rc geninfo_all_blocks=1 00:18:01.459 --rc geninfo_unexecuted_blocks=1 00:18:01.459 00:18:01.459 ' 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:01.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.459 --rc genhtml_branch_coverage=1 00:18:01.459 --rc genhtml_function_coverage=1 00:18:01.459 --rc genhtml_legend=1 00:18:01.459 --rc geninfo_all_blocks=1 00:18:01.459 --rc geninfo_unexecuted_blocks=1 00:18:01.459 00:18:01.459 ' 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:01.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.459 --rc genhtml_branch_coverage=1 00:18:01.459 --rc genhtml_function_coverage=1 00:18:01.459 --rc genhtml_legend=1 00:18:01.459 --rc geninfo_all_blocks=1 00:18:01.459 --rc 
geninfo_unexecuted_blocks=1 00:18:01.459 00:18:01.459 ' 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:01.459 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:01.459 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:01.460 Cannot find device "nvmf_init_br" 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:01.460 Cannot find device "nvmf_init_br2" 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:01.460 Cannot find device "nvmf_tgt_br" 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:01.460 Cannot find device "nvmf_tgt_br2" 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:01.460 Cannot find device "nvmf_init_br" 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:01.460 Cannot find device "nvmf_init_br2" 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:01.460 Cannot find device "nvmf_tgt_br" 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:01.460 Cannot find device "nvmf_tgt_br2" 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:01.460 Cannot find device "nvmf_br" 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:01.460 Cannot find 
device "nvmf_init_if" 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:01.460 Cannot find device "nvmf_init_if2" 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:01.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:01.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:01.460 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:01.719 03:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:01.719 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:01.719 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:18:01.719 00:18:01.719 --- 10.0.0.3 ping statistics --- 00:18:01.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.719 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:18:01.719 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:01.719 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:01.719 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:18:01.719 00:18:01.720 --- 10.0.0.4 ping statistics --- 00:18:01.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.720 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:01.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:01.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:18:01.720 00:18:01.720 --- 10.0.0.1 ping statistics --- 00:18:01.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.720 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:01.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:01.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:18:01.720 00:18:01.720 --- 10.0.0.2 ping statistics --- 00:18:01.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.720 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=76157 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 76157 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 76157 ']' 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.720 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:01.979 [2024-12-05 03:02:32.589744] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:18:01.979 [2024-12-05 03:02:32.589936] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.979 [2024-12-05 03:02:32.775058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.254 [2024-12-05 03:02:32.898562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.256 [2024-12-05 03:02:32.898653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.256 [2024-12-05 03:02:32.898689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.256 [2024-12-05 03:02:32.898718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.256 [2024-12-05 03:02:32.898970] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:02.256 [2024-12-05 03:02:32.900417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.256 [2024-12-05 03:02:33.066075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:02.826 [2024-12-05 03:02:33.617573] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:02.826 Malloc0 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.826 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:03.085 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.085 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:03.085 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.086 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:03.086 [2024-12-05 03:02:33.673951] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:03.086 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.086 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=76189 00:18:03.086 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:03.086 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=76190 00:18:03.086 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:03.086 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=76191 00:18:03.086 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:03.086 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 76189 00:18:03.086 [2024-12-05 03:02:33.913331] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:03.086 [2024-12-05 03:02:33.913980] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:03.086 [2024-12-05 03:02:33.923852] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:04.471 Initializing NVMe Controllers 00:18:04.471 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:04.471 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:18:04.471 Initialization complete. Launching workers. 00:18:04.471 ======================================================== 00:18:04.471 Latency(us) 00:18:04.471 Device Information : IOPS MiB/s Average min max 00:18:04.471 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2770.88 10.82 360.36 249.04 1017.88 00:18:04.471 ======================================================== 00:18:04.471 Total : 2770.88 10.82 360.36 249.04 1017.88 00:18:04.471 00:18:04.471 Initializing NVMe Controllers 00:18:04.471 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:04.471 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:18:04.471 Initialization complete. Launching workers. 00:18:04.471 ======================================================== 00:18:04.471 Latency(us) 00:18:04.471 Device Information : IOPS MiB/s Average min max 00:18:04.471 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2773.00 10.83 360.19 251.70 1014.84 00:18:04.471 ======================================================== 00:18:04.471 Total : 2773.00 10.83 360.19 251.70 1014.84 00:18:04.471 00:18:04.471 Initializing NVMe Controllers 00:18:04.471 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:04.471 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:18:04.471 Initialization complete. Launching workers. 
00:18:04.471 ======================================================== 00:18:04.471 Latency(us) 00:18:04.471 Device Information : IOPS MiB/s Average min max 00:18:04.471 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2802.99 10.95 356.26 151.84 997.44 00:18:04.471 ======================================================== 00:18:04.471 Total : 2802.99 10.95 356.26 151.84 997.44 00:18:04.471 00:18:04.471 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 76190 00:18:04.471 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 76191 00:18:04.471 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:04.471 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:18:04.471 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:04.471 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:18:04.471 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:04.471 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:18:04.471 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:04.471 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:04.471 rmmod nvme_tcp 00:18:04.471 rmmod nvme_fabrics 00:18:04.471 rmmod nvme_keyring 00:18:04.471 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:04.471 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:18:04.471 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:18:04.471 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 76157 ']' 00:18:04.471 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 76157 00:18:04.471 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 76157 ']' 00:18:04.471 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 76157 00:18:04.471 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:18:04.471 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.471 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76157 00:18:04.471 killing process with pid 76157 00:18:04.471 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.471 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.471 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76157' 00:18:04.471 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 76157 00:18:04.471 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 76157 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:05.405 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:05.664 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:05.664 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.664 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.664 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.664 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:18:05.664 00:18:05.664 real 0m4.401s 00:18:05.664 user 0m6.654s 00:18:05.664 
sys 0m1.439s 00:18:05.664 ************************************ 00:18:05.664 END TEST nvmf_control_msg_list 00:18:05.664 ************************************ 00:18:05.664 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.664 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:05.664 03:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:05.664 03:02:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:05.664 03:02:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:05.664 03:02:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:05.664 ************************************ 00:18:05.664 START TEST nvmf_wait_for_buf 00:18:05.664 ************************************ 00:18:05.664 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:05.664 * Looking for test storage... 00:18:05.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:05.664 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:05.664 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:05.664 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:05.925 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:05.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.926 --rc genhtml_branch_coverage=1 00:18:05.926 --rc genhtml_function_coverage=1 00:18:05.926 --rc genhtml_legend=1 00:18:05.926 --rc geninfo_all_blocks=1 00:18:05.926 --rc geninfo_unexecuted_blocks=1 00:18:05.926 00:18:05.926 ' 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:05.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.926 --rc genhtml_branch_coverage=1 00:18:05.926 --rc genhtml_function_coverage=1 00:18:05.926 --rc genhtml_legend=1 00:18:05.926 --rc geninfo_all_blocks=1 00:18:05.926 --rc geninfo_unexecuted_blocks=1 00:18:05.926 00:18:05.926 ' 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:05.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.926 --rc genhtml_branch_coverage=1 00:18:05.926 --rc genhtml_function_coverage=1 00:18:05.926 --rc genhtml_legend=1 00:18:05.926 --rc geninfo_all_blocks=1 00:18:05.926 --rc geninfo_unexecuted_blocks=1 00:18:05.926 00:18:05.926 ' 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:05.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.926 --rc genhtml_branch_coverage=1 00:18:05.926 --rc genhtml_function_coverage=1 00:18:05.926 --rc genhtml_legend=1 00:18:05.926 --rc geninfo_all_blocks=1 00:18:05.926 --rc geninfo_unexecuted_blocks=1 00:18:05.926 00:18:05.926 ' 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:05.926 03:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:05.926 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:05.926 Cannot find device "nvmf_init_br" 00:18:05.926 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:05.927 Cannot find device "nvmf_init_br2" 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:05.927 Cannot find device "nvmf_tgt_br" 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:05.927 Cannot find device "nvmf_tgt_br2" 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:05.927 Cannot find device "nvmf_init_br" 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:05.927 Cannot find device "nvmf_init_br2" 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:05.927 Cannot find device "nvmf_tgt_br" 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:05.927 Cannot find device "nvmf_tgt_br2" 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:05.927 Cannot find device "nvmf_br" 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:05.927 Cannot find device "nvmf_init_if" 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:05.927 Cannot find device "nvmf_init_if2" 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:05.927 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:05.927 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:05.927 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:06.187 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:06.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:18:06.187 00:18:06.187 --- 10.0.0.3 ping statistics --- 00:18:06.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.187 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:06.187 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:06.187 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:18:06.187 00:18:06.187 --- 10.0.0.4 ping statistics --- 00:18:06.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.187 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:06.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:06.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:18:06.187 00:18:06.187 --- 10.0.0.1 ping statistics --- 00:18:06.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.187 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:06.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:06.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:18:06.187 00:18:06.187 --- 10.0.0.2 ping statistics --- 00:18:06.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.187 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:06.187 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:18:06.188 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:06.188 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:06.188 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:06.188 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:06.188 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:06.188 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:06.188 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:06.188 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:18:06.188 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:06.188 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:06.188 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:06.188 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=76441 00:18:06.188 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:06.188 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 76441 00:18:06.188 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 76441 ']' 00:18:06.188 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.188 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.188 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.188 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.188 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:06.447 [2024-12-05 03:02:37.099386] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:18:06.447 [2024-12-05 03:02:37.099525] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.447 [2024-12-05 03:02:37.266364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.706 [2024-12-05 03:02:37.356994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.706 [2024-12-05 03:02:37.357056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.706 [2024-12-05 03:02:37.357090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.706 [2024-12-05 03:02:37.357112] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.706 [2024-12-05 03:02:37.357124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:06.706 [2024-12-05 03:02:37.358370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.275 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.275 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:18:07.275 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:07.275 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:07.275 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:07.534 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.534 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:07.534 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:18:07.534 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:18:07.534 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.534 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:07.534 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.534 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:18:07.534 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.534 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:07.534 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.534 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:18:07.534 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.534 03:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:07.534 [2024-12-05 03:02:38.264817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:07.534 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.534 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:07.534 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.534 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:07.793 Malloc0 00:18:07.793 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.793 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:18:07.793 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.793 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:07.793 [2024-12-05 03:02:38.403974] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.793 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.793 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:18:07.793 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.793 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:07.793 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.793 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:07.793 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.793 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:07.793 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.793 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:07.793 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.793 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:07.793 [2024-12-05 03:02:38.428187] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:07.793 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.793 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:08.051 [2024-12-05 03:02:38.678944] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:09.428 Initializing NVMe Controllers 00:18:09.428 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:09.428 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:18:09.428 Initialization complete. Launching workers. 00:18:09.428 ======================================================== 00:18:09.428 Latency(us) 00:18:09.428 Device Information : IOPS MiB/s Average min max 00:18:09.428 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 479.06 59.88 8350.03 4591.40 16072.54 00:18:09.428 ======================================================== 00:18:09.428 Total : 479.06 59.88 8350.03 4591.40 16072.54 00:18:09.428 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4560 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4560 -eq 0 ]] 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:09.428 rmmod nvme_tcp 00:18:09.428 rmmod nvme_fabrics 00:18:09.428 rmmod nvme_keyring 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 76441 ']' 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 76441 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 76441 ']' 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 76441 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76441 00:18:09.428 killing process with pid 76441 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76441' 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 76441 00:18:09.428 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 76441 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:10.369 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:10.629 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:10.629 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.629 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.629 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.629 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:18:10.629 00:18:10.629 real 0m4.922s 00:18:10.629 user 0m4.465s 00:18:10.629 sys 0m0.892s 00:18:10.629 ************************************ 00:18:10.629 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.629 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:10.629 END TEST nvmf_wait_for_buf 00:18:10.629 ************************************ 00:18:10.629 03:02:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:18:10.629 03:02:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:10.629 03:02:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:10.629 03:02:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.629 03:02:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:10.629 ************************************ 00:18:10.629 START TEST nvmf_fuzz 00:18:10.629 ************************************ 00:18:10.629 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:10.629 * Looking for test storage... 
00:18:10.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:10.629 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:10.629 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:18:10.629 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:10.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.890 --rc genhtml_branch_coverage=1 00:18:10.890 --rc genhtml_function_coverage=1 00:18:10.890 --rc genhtml_legend=1 00:18:10.890 --rc geninfo_all_blocks=1 00:18:10.890 --rc geninfo_unexecuted_blocks=1 00:18:10.890 00:18:10.890 ' 00:18:10.890 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:10.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.890 --rc genhtml_branch_coverage=1 00:18:10.890 --rc genhtml_function_coverage=1 00:18:10.890 --rc genhtml_legend=1 00:18:10.890 --rc geninfo_all_blocks=1 00:18:10.891 --rc geninfo_unexecuted_blocks=1 00:18:10.891 00:18:10.891 ' 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:10.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.891 --rc genhtml_branch_coverage=1 00:18:10.891 --rc genhtml_function_coverage=1 00:18:10.891 --rc genhtml_legend=1 00:18:10.891 --rc geninfo_all_blocks=1 00:18:10.891 --rc geninfo_unexecuted_blocks=1 00:18:10.891 00:18:10.891 ' 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:10.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.891 --rc genhtml_branch_coverage=1 00:18:10.891 --rc genhtml_function_coverage=1 00:18:10.891 --rc genhtml_legend=1 00:18:10.891 --rc geninfo_all_blocks=1 00:18:10.891 --rc geninfo_unexecuted_blocks=1 00:18:10.891 00:18:10.891 ' 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:10.891 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
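[Editor's note] The `[: : integer expression expected` message a few records above comes from the test at line 33 of test/nvmf/common.sh (inside build_nvmf_app_args, per the xtrace): `[ ... -eq ... ]` requires integer operands, so an unset or empty flag makes the comparison fail noisily but harmlessly. A minimal reproduction follows; the variable name is hypothetical and only stands in for whatever flag the real check reads.

```bash
#!/usr/bin/env bash
# The xtrace shows the expanded test `'[' '' -eq 1 ']'`; with an empty operand,
# bash's test builtin prints "[: : integer expression expected" and exits with
# status 2, so the surrounding script simply takes the "flag not set" branch.
SPDK_TEST_EXAMPLE_FLAG=""                       # hypothetical name, for illustration only

if [ "$SPDK_TEST_EXAMPLE_FLAG" -eq 1 ]; then    # reproduces the warning seen in the log
    echo "flag enabled"
fi

# Defaulting the variable keeps the same behaviour without the noise:
if [ "${SPDK_TEST_EXAMPLE_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi
```

Because the failing `[` only serves as an `if` condition, the warning does not abort the run, which is why the trace continues normally after it.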
00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:10.891 Cannot find device "nvmf_init_br" 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:18:10.891 03:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:10.891 Cannot find device "nvmf_init_br2" 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:10.891 Cannot find device "nvmf_tgt_br" 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:10.891 Cannot find device "nvmf_tgt_br2" 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:10.891 Cannot find device "nvmf_init_br" 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:18:10.891 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:10.891 Cannot find device "nvmf_init_br2" 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:10.892 Cannot find device "nvmf_tgt_br" 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:10.892 Cannot find device "nvmf_tgt_br2" 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:10.892 Cannot find device "nvmf_br" 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:10.892 Cannot find device "nvmf_init_if" 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:10.892 Cannot find device "nvmf_init_if2" 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:10.892 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:10.892 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:10.892 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:11.151 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:11.152 03:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:11.152 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:11.152 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:18:11.152 00:18:11.152 --- 10.0.0.3 ping statistics --- 00:18:11.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.152 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:11.152 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:11.152 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:18:11.152 00:18:11.152 --- 10.0.0.4 ping statistics --- 00:18:11.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.152 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:11.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:11.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:18:11.152 00:18:11.152 --- 10.0.0.1 ping statistics --- 00:18:11.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.152 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:11.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:11.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:18:11.152 00:18:11.152 --- 10.0.0.2 ping statistics --- 00:18:11.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.152 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=76742 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 76742 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 76742 ']' 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
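[Editor's note] For orientation, the veth/namespace topology that nvmf_veth_init assembles in the trace above boils down to the sketch below. Interface names and the 10.0.0.x addressing are taken directly from the log; flag ordering and error handling are simplified and may differ between SPDK revisions.

```bash
# Condensed sketch of the test network built by nvmf_veth_init (per the trace above).
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if end is used for traffic, the *_br end is enslaved to the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces move into the namespace; initiator interfaces stay in the root ns.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# All host-side peers hang off one bridge, giving the initiators a path to 10.0.0.3/4.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# The NVMe/TCP port is opened with rules tagged SPDK_NVMF so the later teardown
# (iptables-save | grep -v SPDK_NVMF | iptables-restore) can strip them again.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
```

The ping checks just above (10.0.0.3/4 from the root namespace, 10.0.0.1/2 from inside nvmf_tgt_ns_spdk) simply verify this bridge path before nvmf_tgt is launched inside the namespace.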
00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.152 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:12.534 Malloc0 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:18:12.534 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:18:13.105 Shutting down the fuzz application 00:18:13.105 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:13.674 Shutting down the fuzz application 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:13.674 rmmod nvme_tcp 00:18:13.674 rmmod nvme_fabrics 00:18:13.674 rmmod nvme_keyring 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 76742 ']' 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 76742 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 76742 ']' 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 76742 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76742 00:18:13.674 killing process with pid 76742 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76742' 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 76742 00:18:13.674 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 76742 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:15.053 03:02:45 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:15.053 00:18:15.053 real 0m4.401s 00:18:15.053 user 0m4.658s 00:18:15.053 sys 0m0.890s 00:18:15.053 ************************************ 00:18:15.053 END TEST nvmf_fuzz 00:18:15.053 ************************************ 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:15.053 03:02:45 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:15.053 03:02:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:15.054 03:02:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.054 03:02:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:15.054 ************************************ 00:18:15.054 START TEST nvmf_multiconnection 00:18:15.054 ************************************ 00:18:15.054 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:15.054 * Looking for test storage... 00:18:15.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:15.054 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:15.054 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:18:15.054 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:15.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.314 --rc genhtml_branch_coverage=1 00:18:15.314 --rc genhtml_function_coverage=1 00:18:15.314 --rc genhtml_legend=1 00:18:15.314 --rc geninfo_all_blocks=1 00:18:15.314 --rc geninfo_unexecuted_blocks=1 00:18:15.314 00:18:15.314 ' 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:15.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.314 --rc genhtml_branch_coverage=1 00:18:15.314 --rc genhtml_function_coverage=1 00:18:15.314 --rc genhtml_legend=1 00:18:15.314 --rc geninfo_all_blocks=1 00:18:15.314 --rc geninfo_unexecuted_blocks=1 00:18:15.314 00:18:15.314 ' 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:15.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.314 --rc genhtml_branch_coverage=1 00:18:15.314 --rc genhtml_function_coverage=1 00:18:15.314 --rc genhtml_legend=1 00:18:15.314 --rc geninfo_all_blocks=1 00:18:15.314 --rc geninfo_unexecuted_blocks=1 00:18:15.314 00:18:15.314 ' 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:15.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.314 --rc genhtml_branch_coverage=1 00:18:15.314 --rc genhtml_function_coverage=1 00:18:15.314 --rc genhtml_legend=1 00:18:15.314 --rc geninfo_all_blocks=1 00:18:15.314 --rc geninfo_unexecuted_blocks=1 00:18:15.314 00:18:15.314 ' 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.314 
03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:15.314 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:15.314 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:15.315 03:02:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:15.315 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:15.315 Cannot find device "nvmf_init_br" 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:15.315 Cannot find device "nvmf_init_br2" 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:15.315 Cannot find device "nvmf_tgt_br" 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:15.315 Cannot find device "nvmf_tgt_br2" 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:15.315 Cannot find device "nvmf_init_br" 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:15.315 Cannot find device "nvmf_init_br2" 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:15.315 Cannot find device "nvmf_tgt_br" 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:15.315 Cannot find device "nvmf_tgt_br2" 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:15.315 Cannot find device "nvmf_br" 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:15.315 Cannot find device "nvmf_init_if" 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:18:15.315 Cannot find device "nvmf_init_if2" 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:15.315 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:15.315 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:15.315 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:15.574 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:15.574 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:15.575 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:15.575 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:18:15.575 00:18:15.575 --- 10.0.0.3 ping statistics --- 00:18:15.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.575 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:15.575 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:15.575 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:18:15.575 00:18:15.575 --- 10.0.0.4 ping statistics --- 00:18:15.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.575 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:15.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:15.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:18:15.575 00:18:15.575 --- 10.0.0.1 ping statistics --- 00:18:15.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.575 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:15.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:18:15.575 00:18:15.575 --- 10.0.0.2 ping statistics --- 00:18:15.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.575 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:15.575 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:15.835 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:15.835 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:15.835 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.835 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:15.835 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=77006 00:18:15.835 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:15.835 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 77006 00:18:15.835 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 77006 ']' 00:18:15.835 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.835 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.835 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
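For reference, a condensed sketch of the veth/namespace topology that the nvmf_veth_init trace above builds (device, namespace, and address names are taken from the trace; ordering and flags are simplified and the second initiator/target pair is omitted, so this is not the exact common.sh implementation):

  # target-side interfaces live in a dedicated network namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # bridge the two sides together (each device also needs `ip link set ... up`, omitted here)
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # allow NVMe/TCP traffic to the listener port and forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3    # initiator -> target reachability check, as in the trace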
00:18:15.835 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.835 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:15.835 [2024-12-05 03:02:46.550730] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:18:15.835 [2024-12-05 03:02:46.550971] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.094 [2024-12-05 03:02:46.732263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:16.094 [2024-12-05 03:02:46.820138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.094 [2024-12-05 03:02:46.820451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.094 [2024-12-05 03:02:46.820484] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.094 [2024-12-05 03:02:46.820497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.094 [2024-12-05 03:02:46.820508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.094 [2024-12-05 03:02:46.822371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.094 [2024-12-05 03:02:46.822535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.094 [2024-12-05 03:02:46.823267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:16.094 [2024-12-05 03:02:46.823293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.353 [2024-12-05 03:02:46.998030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.947 [2024-12-05 03:02:47.502780] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:18:16.947 03:02:47 
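The target application is then launched inside that namespace and a TCP transport is created. A minimal equivalent using SPDK's rpc.py is sketched below; the trace itself uses the rpc_cmd wrapper, and the default /var/tmp/spdk.sock RPC socket is assumed:

  # start nvmf_tgt in the target namespace (flags as shown in the trace)
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # once the target is listening on /var/tmp/spdk.sock, create the TCP transport
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192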
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.947 Malloc1 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.947 [2024-12-05 03:02:47.617724] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.947 Malloc2 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.947 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.948 Malloc3 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.948 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.214 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:18:17.214 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.214 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.214 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.214 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:18:17.214 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:17.214 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.214 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.214 Malloc4 00:18:17.214 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.214 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:17.214 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.214 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.214 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.214 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:17.214 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.214 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.214 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.215 Malloc5 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:17.215 
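Each of the 11 loop iterations traced here issues the same four RPCs, differing only in the index. A condensed sketch of one iteration via rpc.py (the trace uses rpc_cmd; the 64 MiB size and 512-byte block size come from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE in multiconnection.sh):

  i=1
  ./scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
  ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
  ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420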
03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.215 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.215 Malloc6 00:18:17.215 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.215 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:17.215 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.215 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.215 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.215 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:17.215 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.215 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.215 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.215 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:18:17.215 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.215 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.474 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.474 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.474 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:17.474 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.474 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.474 Malloc7 00:18:17.474 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.474 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:17.474 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.474 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.474 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.474 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:17.474 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.474 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.475 Malloc8 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.475 
03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.475 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.734 Malloc9 00:18:17.734 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.734 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:17.734 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.734 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.734 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.734 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:17.734 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.734 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.734 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.735 Malloc10 00:18:17.735 03:02:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.735 Malloc11 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.735 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:18:17.994 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:17.994 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:17.994 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:17.994 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:17.994 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:19.900 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:19.900 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:19.900 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:18:19.900 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:19.901 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:19.901 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:19.901 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:19.901 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:18:20.160 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:20.160 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:20.160 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:20.160 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:20.160 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:22.066 03:02:52 
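The connect phase repeats the same pattern for cnode1 through cnode11: connect with the host NQN generated earlier by nvme gen-hostnqn, then poll lsblk until a namespace with the expected serial shows up. A sketch of one iteration; the waitforserial loop is paraphrased from the trace rather than copied from autotest_common.sh:

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  # waitforserial SPDK1: retry until a block device with that serial appears
  i=0
  while (( i++ <= 15 )); do
      sleep 2
      (( $(lsblk -l -o NAME,SERIAL | grep -c SPDK1) >= 1 )) && break
  done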
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:22.066 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:22.066 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:18:22.066 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:22.066 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:22.066 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:22.066 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:22.066 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:18:22.325 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:22.325 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:22.325 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:22.325 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:22.325 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:24.232 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:24.232 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:18:24.232 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:24.232 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:24.232 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:24.232 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:24.232 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:24.232 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:18:24.490 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:24.490 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:24.490 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:24.490 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:18:24.490 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:26.393 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:26.393 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:26.393 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:18:26.393 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:26.393 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:26.393 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:26.394 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:26.394 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:18:26.652 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:26.652 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:26.652 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:26.653 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:26.653 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:28.559 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:28.559 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:28.559 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:18:28.559 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:28.559 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:28.559 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:28.559 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:28.559 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:18:28.818 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:28.818 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:28.818 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local 
nvme_device_counter=1 nvme_devices=0 00:18:28.818 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:28.818 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:30.725 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:30.725 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:30.725 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:18:30.725 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:30.725 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:30.725 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:30.725 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:30.725 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:18:30.984 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:30.984 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:30.984 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:30.984 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:30.984 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:32.889 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:32.889 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:32.889 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:18:32.889 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:32.889 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:32.889 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:32.889 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:32.889 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:18:33.148 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:33.148 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1202 -- # local i=0 00:18:33.148 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:33.148 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:33.148 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:35.052 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:35.052 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:35.052 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:18:35.052 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:35.052 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:35.052 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:35.052 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:35.052 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:18:35.311 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:35.311 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:35.311 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:35.311 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:35.311 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:37.215 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:37.215 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:37.215 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:18:37.215 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:37.215 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:37.215 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:37.215 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:37.215 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:18:37.474 03:03:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:37.474 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:37.474 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:37.474 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:37.474 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:39.425 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:39.425 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:39.425 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:18:39.425 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:39.425 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:39.425 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:39.425 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.425 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:18:39.683 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:39.683 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:39.684 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:39.684 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:39.684 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:41.587 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:41.587 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:41.587 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:18:41.845 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:41.845 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:41.845 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:41.845 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:41.845 [global] 00:18:41.845 thread=1 00:18:41.845 invalidate=1 00:18:41.845 rw=read 00:18:41.845 time_based=1 
00:18:41.845 runtime=10 00:18:41.845 ioengine=libaio 00:18:41.845 direct=1 00:18:41.845 bs=262144 00:18:41.845 iodepth=64 00:18:41.845 norandommap=1 00:18:41.845 numjobs=1 00:18:41.845 00:18:41.845 [job0] 00:18:41.845 filename=/dev/nvme0n1 00:18:41.845 [job1] 00:18:41.845 filename=/dev/nvme10n1 00:18:41.845 [job2] 00:18:41.845 filename=/dev/nvme1n1 00:18:41.845 [job3] 00:18:41.845 filename=/dev/nvme2n1 00:18:41.845 [job4] 00:18:41.845 filename=/dev/nvme3n1 00:18:41.845 [job5] 00:18:41.845 filename=/dev/nvme4n1 00:18:41.845 [job6] 00:18:41.845 filename=/dev/nvme5n1 00:18:41.845 [job7] 00:18:41.845 filename=/dev/nvme6n1 00:18:41.845 [job8] 00:18:41.845 filename=/dev/nvme7n1 00:18:41.845 [job9] 00:18:41.845 filename=/dev/nvme8n1 00:18:41.845 [job10] 00:18:41.845 filename=/dev/nvme9n1 00:18:41.845 Could not set queue depth (nvme0n1) 00:18:41.845 Could not set queue depth (nvme10n1) 00:18:41.845 Could not set queue depth (nvme1n1) 00:18:41.845 Could not set queue depth (nvme2n1) 00:18:41.845 Could not set queue depth (nvme3n1) 00:18:41.845 Could not set queue depth (nvme4n1) 00:18:41.845 Could not set queue depth (nvme5n1) 00:18:41.845 Could not set queue depth (nvme6n1) 00:18:41.845 Could not set queue depth (nvme7n1) 00:18:41.845 Could not set queue depth (nvme8n1) 00:18:41.845 Could not set queue depth (nvme9n1) 00:18:42.102 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.102 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.102 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.102 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.102 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.102 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.102 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.102 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.102 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.102 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.102 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.102 fio-3.35 00:18:42.102 Starting 11 threads 00:18:54.313 00:18:54.313 job0: (groupid=0, jobs=1): err= 0: pid=77467: Thu Dec 5 03:03:23 2024 00:18:54.313 read: IOPS=272, BW=68.1MiB/s (71.5MB/s)(689MiB/10104msec) 00:18:54.313 slat (usec): min=20, max=98552, avg=3628.36, stdev=8537.52 00:18:54.313 clat (msec): min=20, max=333, avg=230.72, stdev=32.28 00:18:54.313 lat (msec): min=23, max=333, avg=234.34, stdev=32.62 00:18:54.313 clat percentiles (msec): 00:18:54.313 | 1.00th=[ 66], 5.00th=[ 199], 10.00th=[ 205], 20.00th=[ 213], 00:18:54.313 | 30.00th=[ 222], 40.00th=[ 228], 50.00th=[ 232], 60.00th=[ 239], 00:18:54.313 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 264], 95.00th=[ 275], 00:18:54.313 | 99.00th=[ 288], 99.50th=[ 305], 99.90th=[ 313], 99.95th=[ 334], 00:18:54.313 | 99.99th=[ 334] 00:18:54.313 bw ( KiB/s): min=56432, max=75624, per=13.00%, 
avg=68889.40, stdev=4591.17, samples=20 00:18:54.313 iops : min= 220, max= 295, avg=268.90, stdev=17.94, samples=20 00:18:54.313 lat (msec) : 50=0.73%, 100=0.69%, 250=78.00%, 500=20.59% 00:18:54.313 cpu : usr=0.19%, sys=1.23%, ctx=574, majf=0, minf=4097 00:18:54.313 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:18:54.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.313 issued rwts: total=2754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.313 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.313 job1: (groupid=0, jobs=1): err= 0: pid=77468: Thu Dec 5 03:03:23 2024 00:18:54.313 read: IOPS=99, BW=24.8MiB/s (26.0MB/s)(252MiB/10158msec) 00:18:54.313 slat (usec): min=19, max=274526, avg=9299.86, stdev=26849.96 00:18:54.313 clat (msec): min=19, max=944, avg=634.08, stdev=185.18 00:18:54.313 lat (msec): min=20, max=944, avg=643.38, stdev=186.25 00:18:54.313 clat percentiles (msec): 00:18:54.313 | 1.00th=[ 128], 5.00th=[ 192], 10.00th=[ 422], 20.00th=[ 531], 00:18:54.313 | 30.00th=[ 575], 40.00th=[ 609], 50.00th=[ 651], 60.00th=[ 693], 00:18:54.313 | 70.00th=[ 760], 80.00th=[ 810], 90.00th=[ 835], 95.00th=[ 877], 00:18:54.313 | 99.00th=[ 902], 99.50th=[ 911], 99.90th=[ 944], 99.95th=[ 944], 00:18:54.313 | 99.99th=[ 944] 00:18:54.313 bw ( KiB/s): min=15360, max=37450, per=4.57%, avg=24197.00, stdev=5327.03, samples=20 00:18:54.313 iops : min= 60, max= 146, avg=94.40, stdev=20.71, samples=20 00:18:54.313 lat (msec) : 20=0.10%, 250=7.33%, 500=9.32%, 750=51.64%, 1000=31.62% 00:18:54.313 cpu : usr=0.05%, sys=0.49%, ctx=187, majf=0, minf=4097 00:18:54.313 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8% 00:18:54.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.313 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.313 issued rwts: total=1009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.313 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.313 job2: (groupid=0, jobs=1): err= 0: pid=77469: Thu Dec 5 03:03:23 2024 00:18:54.313 read: IOPS=101, BW=25.5MiB/s (26.7MB/s)(259MiB/10160msec) 00:18:54.313 slat (usec): min=20, max=215273, avg=9694.56, stdev=25864.20 00:18:54.313 clat (msec): min=36, max=880, avg=617.71, stdev=143.18 00:18:54.313 lat (msec): min=36, max=900, avg=627.40, stdev=144.12 00:18:54.313 clat percentiles (msec): 00:18:54.313 | 1.00th=[ 38], 5.00th=[ 338], 10.00th=[ 477], 20.00th=[ 558], 00:18:54.313 | 30.00th=[ 592], 40.00th=[ 617], 50.00th=[ 634], 60.00th=[ 659], 00:18:54.313 | 70.00th=[ 684], 80.00th=[ 718], 90.00th=[ 760], 95.00th=[ 810], 00:18:54.313 | 99.00th=[ 860], 99.50th=[ 869], 99.90th=[ 877], 99.95th=[ 877], 00:18:54.313 | 99.99th=[ 877] 00:18:54.313 bw ( KiB/s): min=18981, max=34816, per=4.69%, avg=24863.70, stdev=3858.73, samples=20 00:18:54.313 iops : min= 74, max= 136, avg=97.00, stdev=15.08, samples=20 00:18:54.313 lat (msec) : 50=2.13%, 250=0.68%, 500=11.40%, 750=74.40%, 1000=11.40% 00:18:54.313 cpu : usr=0.06%, sys=0.48%, ctx=190, majf=0, minf=4097 00:18:54.313 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9% 00:18:54.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.313 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.313 issued rwts: total=1035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.313 latency : 
target=0, window=0, percentile=100.00%, depth=64 00:18:54.313 job3: (groupid=0, jobs=1): err= 0: pid=77470: Thu Dec 5 03:03:23 2024 00:18:54.313 read: IOPS=152, BW=38.0MiB/s (39.9MB/s)(385MiB/10124msec) 00:18:54.313 slat (usec): min=19, max=264096, avg=6495.09, stdev=20964.84 00:18:54.313 clat (msec): min=12, max=578, avg=413.70, stdev=85.82 00:18:54.313 lat (msec): min=13, max=720, avg=420.20, stdev=85.74 00:18:54.313 clat percentiles (msec): 00:18:54.313 | 1.00th=[ 169], 5.00th=[ 253], 10.00th=[ 326], 20.00th=[ 363], 00:18:54.313 | 30.00th=[ 384], 40.00th=[ 397], 50.00th=[ 414], 60.00th=[ 426], 00:18:54.313 | 70.00th=[ 447], 80.00th=[ 481], 90.00th=[ 535], 95.00th=[ 550], 00:18:54.313 | 99.00th=[ 575], 99.50th=[ 575], 99.90th=[ 575], 99.95th=[ 575], 00:18:54.313 | 99.99th=[ 575] 00:18:54.313 bw ( KiB/s): min=32768, max=44633, per=7.13%, avg=37790.50, stdev=4229.60, samples=20 00:18:54.313 iops : min= 128, max= 174, avg=147.50, stdev=16.52, samples=20 00:18:54.313 lat (msec) : 20=0.19%, 250=4.74%, 500=79.68%, 750=15.39% 00:18:54.313 cpu : usr=0.07%, sys=0.73%, ctx=286, majf=0, minf=4098 00:18:54.313 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:18:54.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.313 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.313 issued rwts: total=1540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.313 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.313 job4: (groupid=0, jobs=1): err= 0: pid=77471: Thu Dec 5 03:03:23 2024 00:18:54.313 read: IOPS=106, BW=26.5MiB/s (27.8MB/s)(269MiB/10155msec) 00:18:54.313 slat (usec): min=20, max=178457, avg=9284.89, stdev=24119.71 00:18:54.313 clat (msec): min=79, max=853, avg=593.17, stdev=140.97 00:18:54.313 lat (msec): min=79, max=853, avg=602.45, stdev=143.18 00:18:54.313 clat percentiles (msec): 00:18:54.313 | 1.00th=[ 93], 5.00th=[ 271], 10.00th=[ 347], 20.00th=[ 550], 00:18:54.313 | 30.00th=[ 584], 40.00th=[ 609], 50.00th=[ 625], 60.00th=[ 651], 00:18:54.313 | 70.00th=[ 667], 80.00th=[ 684], 90.00th=[ 718], 95.00th=[ 735], 00:18:54.313 | 99.00th=[ 776], 99.50th=[ 818], 99.90th=[ 852], 99.95th=[ 852], 00:18:54.313 | 99.99th=[ 852] 00:18:54.313 bw ( KiB/s): min=19968, max=45146, per=4.90%, avg=25952.45, stdev=5644.97, samples=20 00:18:54.313 iops : min= 78, max= 176, avg=101.30, stdev=21.99, samples=20 00:18:54.313 lat (msec) : 100=1.39%, 250=2.23%, 500=9.38%, 750=83.94%, 1000=3.06% 00:18:54.313 cpu : usr=0.02%, sys=0.58%, ctx=241, majf=0, minf=4097 00:18:54.313 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.2% 00:18:54.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.313 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.313 issued rwts: total=1077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.313 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.313 job5: (groupid=0, jobs=1): err= 0: pid=77472: Thu Dec 5 03:03:23 2024 00:18:54.313 read: IOPS=177, BW=44.4MiB/s (46.6MB/s)(450MiB/10121msec) 00:18:54.313 slat (usec): min=19, max=139933, avg=5559.41, stdev=14638.62 00:18:54.313 clat (msec): min=41, max=531, avg=354.26, stdev=75.20 00:18:54.313 lat (msec): min=41, max=553, avg=359.82, stdev=76.44 00:18:54.313 clat percentiles (msec): 00:18:54.313 | 1.00th=[ 70], 5.00th=[ 259], 10.00th=[ 271], 20.00th=[ 292], 00:18:54.313 | 30.00th=[ 326], 40.00th=[ 359], 50.00th=[ 372], 60.00th=[ 384], 00:18:54.313 | 
70.00th=[ 397], 80.00th=[ 409], 90.00th=[ 430], 95.00th=[ 451], 00:18:54.313 | 99.00th=[ 485], 99.50th=[ 498], 99.90th=[ 531], 99.95th=[ 531], 00:18:54.313 | 99.99th=[ 531] 00:18:54.313 bw ( KiB/s): min=36864, max=61563, per=8.38%, avg=44422.85, stdev=6978.94, samples=20 00:18:54.313 iops : min= 144, max= 240, avg=173.40, stdev=27.19, samples=20 00:18:54.313 lat (msec) : 50=0.44%, 100=1.33%, 250=2.50%, 500=95.27%, 750=0.44% 00:18:54.313 cpu : usr=0.10%, sys=0.81%, ctx=406, majf=0, minf=4097 00:18:54.313 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:18:54.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.313 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.313 issued rwts: total=1798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.313 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.314 job6: (groupid=0, jobs=1): err= 0: pid=77473: Thu Dec 5 03:03:23 2024 00:18:54.314 read: IOPS=103, BW=25.9MiB/s (27.1MB/s)(263MiB/10154msec) 00:18:54.314 slat (usec): min=19, max=276055, avg=9269.24, stdev=27436.79 00:18:54.314 clat (msec): min=18, max=882, avg=608.06, stdev=176.01 00:18:54.314 lat (msec): min=19, max=883, avg=617.33, stdev=177.92 00:18:54.314 clat percentiles (msec): 00:18:54.314 | 1.00th=[ 61], 5.00th=[ 138], 10.00th=[ 405], 20.00th=[ 531], 00:18:54.314 | 30.00th=[ 558], 40.00th=[ 609], 50.00th=[ 651], 60.00th=[ 684], 00:18:54.314 | 70.00th=[ 709], 80.00th=[ 735], 90.00th=[ 776], 95.00th=[ 802], 00:18:54.314 | 99.00th=[ 860], 99.50th=[ 860], 99.90th=[ 860], 99.95th=[ 885], 00:18:54.314 | 99.99th=[ 885] 00:18:54.314 bw ( KiB/s): min=18432, max=45568, per=4.77%, avg=25269.45, stdev=6571.29, samples=20 00:18:54.314 iops : min= 72, max= 178, avg=98.60, stdev=25.64, samples=20 00:18:54.314 lat (msec) : 20=0.19%, 100=4.66%, 250=1.43%, 500=7.90%, 750=70.22% 00:18:54.314 lat (msec) : 1000=15.60% 00:18:54.314 cpu : usr=0.02%, sys=0.49%, ctx=192, majf=0, minf=4097 00:18:54.314 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:18:54.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.314 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.314 issued rwts: total=1051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.314 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.314 job7: (groupid=0, jobs=1): err= 0: pid=77474: Thu Dec 5 03:03:23 2024 00:18:54.314 read: IOPS=320, BW=80.2MiB/s (84.1MB/s)(810MiB/10088msec) 00:18:54.314 slat (usec): min=21, max=174645, avg=3089.98, stdev=8415.74 00:18:54.314 clat (msec): min=23, max=478, avg=195.96, stdev=37.83 00:18:54.314 lat (msec): min=25, max=478, avg=199.05, stdev=38.15 00:18:54.314 clat percentiles (msec): 00:18:54.314 | 1.00th=[ 128], 5.00th=[ 161], 10.00th=[ 171], 20.00th=[ 180], 00:18:54.314 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 197], 00:18:54.314 | 70.00th=[ 199], 80.00th=[ 205], 90.00th=[ 218], 95.00th=[ 236], 00:18:54.314 | 99.00th=[ 414], 99.50th=[ 447], 99.90th=[ 447], 99.95th=[ 481], 00:18:54.314 | 99.99th=[ 481] 00:18:54.314 bw ( KiB/s): min=35398, max=92672, per=15.33%, avg=81269.25, stdev=12706.43, samples=20 00:18:54.314 iops : min= 138, max= 362, avg=317.30, stdev=49.70, samples=20 00:18:54.314 lat (msec) : 50=0.19%, 100=0.28%, 250=95.83%, 500=3.71% 00:18:54.314 cpu : usr=0.13%, sys=1.50%, ctx=631, majf=0, minf=4097 00:18:54.314 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 
00:18:54.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.314 issued rwts: total=3238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.314 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.314 job8: (groupid=0, jobs=1): err= 0: pid=77475: Thu Dec 5 03:03:23 2024 00:18:54.314 read: IOPS=150, BW=37.6MiB/s (39.5MB/s)(381MiB/10124msec) 00:18:54.314 slat (usec): min=20, max=409243, avg=6560.90, stdev=21523.20 00:18:54.314 clat (msec): min=116, max=578, avg=418.06, stdev=77.02 00:18:54.314 lat (msec): min=128, max=674, avg=424.62, stdev=76.12 00:18:54.314 clat percentiles (msec): 00:18:54.314 | 1.00th=[ 194], 5.00th=[ 292], 10.00th=[ 313], 20.00th=[ 359], 00:18:54.314 | 30.00th=[ 384], 40.00th=[ 397], 50.00th=[ 418], 60.00th=[ 439], 00:18:54.314 | 70.00th=[ 472], 80.00th=[ 489], 90.00th=[ 514], 95.00th=[ 527], 00:18:54.314 | 99.00th=[ 558], 99.50th=[ 567], 99.90th=[ 575], 99.95th=[ 575], 00:18:54.314 | 99.99th=[ 575] 00:18:54.314 bw ( KiB/s): min=16416, max=48640, per=7.06%, avg=37393.55, stdev=7264.71, samples=20 00:18:54.314 iops : min= 64, max= 190, avg=145.95, stdev=28.40, samples=20 00:18:54.314 lat (msec) : 250=1.51%, 500=84.06%, 750=14.44% 00:18:54.314 cpu : usr=0.09%, sys=0.66%, ctx=298, majf=0, minf=4097 00:18:54.314 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:18:54.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.314 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.314 issued rwts: total=1524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.314 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.314 job9: (groupid=0, jobs=1): err= 0: pid=77476: Thu Dec 5 03:03:23 2024 00:18:54.314 read: IOPS=274, BW=68.5MiB/s (71.9MB/s)(693MiB/10106msec) 00:18:54.314 slat (usec): min=19, max=78105, avg=3603.62, stdev=8323.80 00:18:54.314 clat (msec): min=19, max=339, avg=229.40, stdev=33.71 00:18:54.314 lat (msec): min=20, max=339, avg=233.00, stdev=34.05 00:18:54.314 clat percentiles (msec): 00:18:54.314 | 1.00th=[ 91], 5.00th=[ 190], 10.00th=[ 203], 20.00th=[ 213], 00:18:54.314 | 30.00th=[ 222], 40.00th=[ 226], 50.00th=[ 232], 60.00th=[ 236], 00:18:54.314 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 264], 95.00th=[ 275], 00:18:54.314 | 99.00th=[ 292], 99.50th=[ 296], 99.90th=[ 326], 99.95th=[ 338], 00:18:54.314 | 99.99th=[ 338] 00:18:54.314 bw ( KiB/s): min=59784, max=78848, per=13.08%, avg=69332.00, stdev=4434.19, samples=20 00:18:54.314 iops : min= 233, max= 308, avg=270.70, stdev=17.37, samples=20 00:18:54.314 lat (msec) : 20=0.04%, 50=0.87%, 100=0.51%, 250=77.88%, 500=20.71% 00:18:54.314 cpu : usr=0.19%, sys=1.28%, ctx=577, majf=0, minf=4097 00:18:54.314 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:18:54.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.314 issued rwts: total=2771,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.314 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.314 job10: (groupid=0, jobs=1): err= 0: pid=77477: Thu Dec 5 03:03:23 2024 00:18:54.314 read: IOPS=320, BW=80.2MiB/s (84.1MB/s)(809MiB/10087msec) 00:18:54.314 slat (usec): min=18, max=167415, avg=3088.72, stdev=8531.38 00:18:54.314 clat (msec): min=18, max=456, avg=196.10, stdev=36.26 00:18:54.314 lat (msec): 
min=19, max=525, avg=199.19, stdev=36.62 00:18:54.314 clat percentiles (msec): 00:18:54.314 | 1.00th=[ 130], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 180], 00:18:54.314 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 194], 00:18:54.314 | 70.00th=[ 199], 80.00th=[ 205], 90.00th=[ 218], 95.00th=[ 234], 00:18:54.314 | 99.00th=[ 414], 99.50th=[ 456], 99.90th=[ 456], 99.95th=[ 456], 00:18:54.314 | 99.99th=[ 456] 00:18:54.314 bw ( KiB/s): min=33792, max=88910, per=15.32%, avg=81211.50, stdev=11641.24, samples=20 00:18:54.314 iops : min= 132, max= 347, avg=317.10, stdev=45.46, samples=20 00:18:54.314 lat (msec) : 20=0.06%, 50=0.03%, 100=0.19%, 250=96.17%, 500=3.55% 00:18:54.314 cpu : usr=0.13%, sys=1.50%, ctx=671, majf=0, minf=4097 00:18:54.314 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:54.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.314 issued rwts: total=3236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.314 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.314 00:18:54.314 Run status group 0 (all jobs): 00:18:54.314 READ: bw=518MiB/s (543MB/s), 24.8MiB/s-80.2MiB/s (26.0MB/s-84.1MB/s), io=5258MiB (5514MB), run=10087-10160msec 00:18:54.314 00:18:54.314 Disk stats (read/write): 00:18:54.314 nvme0n1: ios=5384/0, merge=0/0, ticks=1228604/0, in_queue=1228604, util=97.70% 00:18:54.314 nvme10n1: ios=1890/0, merge=0/0, ticks=1200717/0, in_queue=1200717, util=97.88% 00:18:54.314 nvme1n1: ios=1942/0, merge=0/0, ticks=1206225/0, in_queue=1206225, util=98.06% 00:18:54.314 nvme2n1: ios=2959/0, merge=0/0, ticks=1218496/0, in_queue=1218496, util=98.20% 00:18:54.314 nvme3n1: ios=2031/0, merge=0/0, ticks=1212234/0, in_queue=1212234, util=98.11% 00:18:54.314 nvme4n1: ios=3473/0, merge=0/0, ticks=1217001/0, in_queue=1217001, util=98.37% 00:18:54.314 nvme5n1: ios=1982/0, merge=0/0, ticks=1200490/0, in_queue=1200490, util=98.54% 00:18:54.314 nvme6n1: ios=6349/0, merge=0/0, ticks=1233056/0, in_queue=1233056, util=98.63% 00:18:54.314 nvme7n1: ios=2920/0, merge=0/0, ticks=1221522/0, in_queue=1221522, util=98.90% 00:18:54.314 nvme8n1: ios=5424/0, merge=0/0, ticks=1230185/0, in_queue=1230185, util=99.08% 00:18:54.314 nvme9n1: ios=6344/0, merge=0/0, ticks=1230412/0, in_queue=1230412, util=99.17% 00:18:54.314 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:54.314 [global] 00:18:54.314 thread=1 00:18:54.314 invalidate=1 00:18:54.314 rw=randwrite 00:18:54.314 time_based=1 00:18:54.314 runtime=10 00:18:54.314 ioengine=libaio 00:18:54.314 direct=1 00:18:54.314 bs=262144 00:18:54.314 iodepth=64 00:18:54.314 norandommap=1 00:18:54.314 numjobs=1 00:18:54.314 00:18:54.314 [job0] 00:18:54.314 filename=/dev/nvme0n1 00:18:54.314 [job1] 00:18:54.314 filename=/dev/nvme10n1 00:18:54.314 [job2] 00:18:54.314 filename=/dev/nvme1n1 00:18:54.314 [job3] 00:18:54.314 filename=/dev/nvme2n1 00:18:54.314 [job4] 00:18:54.314 filename=/dev/nvme3n1 00:18:54.314 [job5] 00:18:54.314 filename=/dev/nvme4n1 00:18:54.314 [job6] 00:18:54.314 filename=/dev/nvme5n1 00:18:54.314 [job7] 00:18:54.314 filename=/dev/nvme6n1 00:18:54.314 [job8] 00:18:54.314 filename=/dev/nvme7n1 00:18:54.314 [job9] 00:18:54.314 filename=/dev/nvme8n1 00:18:54.314 [job10] 00:18:54.314 filename=/dev/nvme9n1 00:18:54.314 Could not set queue depth 
(nvme0n1) 00:18:54.314 Could not set queue depth (nvme10n1) 00:18:54.314 Could not set queue depth (nvme1n1) 00:18:54.314 Could not set queue depth (nvme2n1) 00:18:54.314 Could not set queue depth (nvme3n1) 00:18:54.314 Could not set queue depth (nvme4n1) 00:18:54.314 Could not set queue depth (nvme5n1) 00:18:54.314 Could not set queue depth (nvme6n1) 00:18:54.314 Could not set queue depth (nvme7n1) 00:18:54.314 Could not set queue depth (nvme8n1) 00:18:54.314 Could not set queue depth (nvme9n1) 00:18:54.314 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.314 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.314 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.314 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.315 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.315 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.315 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.315 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.315 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.315 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.315 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.315 fio-3.35 00:18:54.315 Starting 11 threads 00:19:04.295 00:19:04.295 job0: (groupid=0, jobs=1): err= 0: pid=77673: Thu Dec 5 03:03:34 2024 00:19:04.295 write: IOPS=292, BW=73.2MiB/s (76.7MB/s)(744MiB/10172msec); 0 zone resets 00:19:04.295 slat (usec): min=16, max=20885, avg=3244.71, stdev=6025.04 00:19:04.295 clat (msec): min=9, max=399, avg=215.32, stdev=65.44 00:19:04.295 lat (msec): min=9, max=399, avg=218.56, stdev=66.05 00:19:04.295 clat percentiles (msec): 00:19:04.295 | 1.00th=[ 41], 5.00th=[ 66], 10.00th=[ 70], 20.00th=[ 226], 00:19:04.295 | 30.00th=[ 230], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 243], 00:19:04.295 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 247], 95.00th=[ 251], 00:19:04.295 | 99.00th=[ 338], 99.50th=[ 368], 99.90th=[ 384], 99.95th=[ 401], 00:19:04.295 | 99.99th=[ 401] 00:19:04.295 bw ( KiB/s): min=49664, max=225280, per=11.74%, avg=74591.65, stdev=35712.64, samples=20 00:19:04.295 iops : min= 194, max= 880, avg=291.35, stdev=139.51, samples=20 00:19:04.295 lat (msec) : 10=0.03%, 20=0.40%, 50=0.81%, 100=13.54%, 250=80.69% 00:19:04.295 lat (msec) : 500=4.53% 00:19:04.295 cpu : usr=0.47%, sys=0.93%, ctx=3365, majf=0, minf=1 00:19:04.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:19:04.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.295 issued rwts: total=0,2977,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.295 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.295 job1: (groupid=0, jobs=1): err= 0: pid=77674: 
Thu Dec 5 03:03:34 2024 00:19:04.295 write: IOPS=265, BW=66.3MiB/s (69.6MB/s)(675MiB/10174msec); 0 zone resets 00:19:04.295 slat (usec): min=17, max=47613, avg=3697.98, stdev=6518.21 00:19:04.296 clat (msec): min=22, max=410, avg=237.37, stdev=31.98 00:19:04.296 lat (msec): min=22, max=410, avg=241.07, stdev=31.85 00:19:04.296 clat percentiles (msec): 00:19:04.296 | 1.00th=[ 90], 5.00th=[ 201], 10.00th=[ 226], 20.00th=[ 230], 00:19:04.296 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 243], 00:19:04.296 | 70.00th=[ 245], 80.00th=[ 245], 90.00th=[ 247], 95.00th=[ 271], 00:19:04.296 | 99.00th=[ 321], 99.50th=[ 363], 99.90th=[ 397], 99.95th=[ 409], 00:19:04.296 | 99.99th=[ 409] 00:19:04.296 bw ( KiB/s): min=57344, max=77979, per=10.63%, avg=67508.20, stdev=3494.01, samples=20 00:19:04.296 iops : min= 224, max= 304, avg=263.65, stdev=13.55, samples=20 00:19:04.296 lat (msec) : 50=0.44%, 100=0.74%, 250=91.89%, 500=6.93% 00:19:04.296 cpu : usr=0.42%, sys=0.89%, ctx=2761, majf=0, minf=1 00:19:04.296 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:19:04.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.296 issued rwts: total=0,2700,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.296 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.296 job2: (groupid=0, jobs=1): err= 0: pid=77686: Thu Dec 5 03:03:34 2024 00:19:04.296 write: IOPS=142, BW=35.6MiB/s (37.4MB/s)(366MiB/10265msec); 0 zone resets 00:19:04.296 slat (usec): min=16, max=156123, avg=6845.29, stdev=12684.50 00:19:04.296 clat (msec): min=157, max=683, avg=441.96, stdev=44.75 00:19:04.296 lat (msec): min=157, max=683, avg=448.81, stdev=44.01 00:19:04.296 clat percentiles (msec): 00:19:04.296 | 1.00th=[ 222], 5.00th=[ 397], 10.00th=[ 418], 20.00th=[ 426], 00:19:04.296 | 30.00th=[ 443], 40.00th=[ 447], 50.00th=[ 451], 60.00th=[ 451], 00:19:04.296 | 70.00th=[ 456], 80.00th=[ 456], 90.00th=[ 468], 95.00th=[ 472], 00:19:04.296 | 99.00th=[ 575], 99.50th=[ 625], 99.90th=[ 684], 99.95th=[ 684], 00:19:04.296 | 99.99th=[ 684] 00:19:04.296 bw ( KiB/s): min=30208, max=38912, per=5.64%, avg=35803.70, stdev=1898.90, samples=20 00:19:04.296 iops : min= 118, max= 152, avg=139.75, stdev= 7.45, samples=20 00:19:04.296 lat (msec) : 250=1.37%, 500=96.58%, 750=2.05% 00:19:04.296 cpu : usr=0.23%, sys=0.47%, ctx=1540, majf=0, minf=1 00:19:04.296 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:19:04.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.296 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.296 issued rwts: total=0,1463,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.296 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.296 job3: (groupid=0, jobs=1): err= 0: pid=77687: Thu Dec 5 03:03:34 2024 00:19:04.296 write: IOPS=143, BW=35.8MiB/s (37.5MB/s)(368MiB/10272msec); 0 zone resets 00:19:04.296 slat (usec): min=17, max=121581, avg=6693.44, stdev=12240.15 00:19:04.296 clat (msec): min=123, max=701, avg=439.95, stdev=49.76 00:19:04.296 lat (msec): min=123, max=701, avg=446.64, stdev=49.38 00:19:04.296 clat percentiles (msec): 00:19:04.296 | 1.00th=[ 188], 5.00th=[ 393], 10.00th=[ 418], 20.00th=[ 426], 00:19:04.296 | 30.00th=[ 439], 40.00th=[ 447], 50.00th=[ 447], 60.00th=[ 451], 00:19:04.296 | 70.00th=[ 456], 80.00th=[ 456], 90.00th=[ 464], 95.00th=[ 468], 00:19:04.296 | 99.00th=[ 
592], 99.50th=[ 642], 99.90th=[ 701], 99.95th=[ 701], 00:19:04.296 | 99.99th=[ 701] 00:19:04.296 bw ( KiB/s): min=34304, max=36864, per=5.67%, avg=36041.30, stdev=1047.14, samples=20 00:19:04.296 iops : min= 134, max= 144, avg=140.75, stdev= 4.14, samples=20 00:19:04.296 lat (msec) : 250=1.63%, 500=96.33%, 750=2.04% 00:19:04.296 cpu : usr=0.24%, sys=0.45%, ctx=1409, majf=0, minf=1 00:19:04.296 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:19:04.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.296 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.296 issued rwts: total=0,1471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.296 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.296 job4: (groupid=0, jobs=1): err= 0: pid=77688: Thu Dec 5 03:03:34 2024 00:19:04.296 write: IOPS=144, BW=36.2MiB/s (37.9MB/s)(372MiB/10281msec); 0 zone resets 00:19:04.296 slat (usec): min=17, max=114350, avg=6718.04, stdev=12186.91 00:19:04.296 clat (msec): min=14, max=718, avg=435.22, stdev=67.75 00:19:04.296 lat (msec): min=14, max=718, avg=441.94, stdev=67.91 00:19:04.296 clat percentiles (msec): 00:19:04.296 | 1.00th=[ 77], 5.00th=[ 380], 10.00th=[ 418], 20.00th=[ 426], 00:19:04.296 | 30.00th=[ 439], 40.00th=[ 447], 50.00th=[ 447], 60.00th=[ 451], 00:19:04.296 | 70.00th=[ 451], 80.00th=[ 456], 90.00th=[ 460], 95.00th=[ 464], 00:19:04.296 | 99.00th=[ 609], 99.50th=[ 659], 99.90th=[ 718], 99.95th=[ 718], 00:19:04.296 | 99.99th=[ 718] 00:19:04.296 bw ( KiB/s): min=34816, max=38912, per=5.74%, avg=36476.30, stdev=1034.41, samples=20 00:19:04.296 iops : min= 136, max= 152, avg=142.45, stdev= 4.03, samples=20 00:19:04.296 lat (msec) : 20=0.27%, 50=0.27%, 100=0.81%, 250=1.88%, 500=94.35% 00:19:04.296 lat (msec) : 750=2.42% 00:19:04.296 cpu : usr=0.32%, sys=0.44%, ctx=495, majf=0, minf=1 00:19:04.296 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.8% 00:19:04.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.296 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.296 issued rwts: total=0,1488,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.296 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.296 job5: (groupid=0, jobs=1): err= 0: pid=77689: Thu Dec 5 03:03:34 2024 00:19:04.296 write: IOPS=144, BW=36.0MiB/s (37.8MB/s)(370MiB/10274msec); 0 zone resets 00:19:04.296 slat (usec): min=20, max=58138, avg=6758.02, stdev=12109.75 00:19:04.296 clat (msec): min=40, max=707, avg=437.28, stdev=64.31 00:19:04.296 lat (msec): min=40, max=707, avg=444.04, stdev=64.39 00:19:04.296 clat percentiles (msec): 00:19:04.296 | 1.00th=[ 101], 5.00th=[ 347], 10.00th=[ 418], 20.00th=[ 426], 00:19:04.296 | 30.00th=[ 439], 40.00th=[ 447], 50.00th=[ 451], 60.00th=[ 451], 00:19:04.296 | 70.00th=[ 456], 80.00th=[ 460], 90.00th=[ 468], 95.00th=[ 472], 00:19:04.296 | 99.00th=[ 600], 99.50th=[ 651], 99.90th=[ 709], 99.95th=[ 709], 00:19:04.296 | 99.99th=[ 709] 00:19:04.296 bw ( KiB/s): min=34746, max=40960, per=5.71%, avg=36271.70, stdev=1482.74, samples=20 00:19:04.296 iops : min= 135, max= 160, avg=141.65, stdev= 5.83, samples=20 00:19:04.296 lat (msec) : 50=0.27%, 100=0.61%, 250=2.09%, 500=95.00%, 750=2.03% 00:19:04.296 cpu : usr=0.21%, sys=0.48%, ctx=1633, majf=0, minf=1 00:19:04.296 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:19:04.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:19:04.296 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.296 issued rwts: total=0,1480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.296 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.296 job6: (groupid=0, jobs=1): err= 0: pid=77690: Thu Dec 5 03:03:34 2024 00:19:04.296 write: IOPS=265, BW=66.3MiB/s (69.5MB/s)(674MiB/10176msec); 0 zone resets 00:19:04.296 slat (usec): min=17, max=82796, avg=3703.40, stdev=6606.44 00:19:04.296 clat (msec): min=20, max=410, avg=237.67, stdev=28.04 00:19:04.296 lat (msec): min=20, max=410, avg=241.38, stdev=27.78 00:19:04.296 clat percentiles (msec): 00:19:04.296 | 1.00th=[ 91], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 230], 00:19:04.296 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 243], 00:19:04.296 | 70.00th=[ 245], 80.00th=[ 245], 90.00th=[ 247], 95.00th=[ 255], 00:19:04.296 | 99.00th=[ 305], 99.50th=[ 363], 99.90th=[ 397], 99.95th=[ 409], 00:19:04.296 | 99.99th=[ 409] 00:19:04.296 bw ( KiB/s): min=61440, max=71680, per=10.61%, avg=67423.65, stdev=1850.18, samples=20 00:19:04.296 iops : min= 240, max= 280, avg=263.35, stdev= 7.23, samples=20 00:19:04.296 lat (msec) : 50=0.44%, 100=0.74%, 250=92.40%, 500=6.41% 00:19:04.296 cpu : usr=0.47%, sys=0.81%, ctx=2644, majf=0, minf=1 00:19:04.296 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:19:04.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.296 issued rwts: total=0,2697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.296 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.296 job7: (groupid=0, jobs=1): err= 0: pid=77691: Thu Dec 5 03:03:34 2024 00:19:04.296 write: IOPS=420, BW=105MiB/s (110MB/s)(1064MiB/10127msec); 0 zone resets 00:19:04.296 slat (usec): min=16, max=11935, avg=2329.58, stdev=4111.03 00:19:04.296 clat (msec): min=9, max=278, avg=149.90, stdev=29.62 00:19:04.296 lat (msec): min=9, max=279, avg=152.23, stdev=29.85 00:19:04.296 clat percentiles (msec): 00:19:04.296 | 1.00th=[ 63], 5.00th=[ 68], 10.00th=[ 97], 20.00th=[ 150], 00:19:04.296 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 161], 00:19:04.296 | 70.00th=[ 163], 80.00th=[ 163], 90.00th=[ 165], 95.00th=[ 167], 00:19:04.296 | 99.00th=[ 169], 99.50th=[ 230], 99.90th=[ 268], 99.95th=[ 268], 00:19:04.296 | 99.99th=[ 279] 00:19:04.296 bw ( KiB/s): min=98816, max=214528, per=16.90%, avg=107330.50, stdev=25314.83, samples=20 00:19:04.296 iops : min= 386, max= 838, avg=419.25, stdev=98.89, samples=20 00:19:04.296 lat (msec) : 10=0.12%, 20=0.07%, 50=0.54%, 100=9.42%, 250=89.61% 00:19:04.296 lat (msec) : 500=0.23% 00:19:04.296 cpu : usr=0.78%, sys=1.20%, ctx=4890, majf=0, minf=1 00:19:04.296 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:19:04.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.296 issued rwts: total=0,4256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.296 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.296 job8: (groupid=0, jobs=1): err= 0: pid=77692: Thu Dec 5 03:03:34 2024 00:19:04.296 write: IOPS=141, BW=35.5MiB/s (37.2MB/s)(364MiB/10262msec); 0 zone resets 00:19:04.297 slat (usec): min=16, max=239135, avg=6868.24, stdev=13434.46 00:19:04.297 clat (msec): min=240, max=695, avg=443.96, stdev=39.11 00:19:04.297 lat (msec): 
min=241, max=695, avg=450.82, stdev=37.76 00:19:04.297 clat percentiles (msec): 00:19:04.297 | 1.00th=[ 284], 5.00th=[ 414], 10.00th=[ 418], 20.00th=[ 426], 00:19:04.297 | 30.00th=[ 443], 40.00th=[ 447], 50.00th=[ 447], 60.00th=[ 451], 00:19:04.297 | 70.00th=[ 451], 80.00th=[ 456], 90.00th=[ 460], 95.00th=[ 477], 00:19:04.297 | 99.00th=[ 609], 99.50th=[ 642], 99.90th=[ 693], 99.95th=[ 693], 00:19:04.297 | 99.99th=[ 693] 00:19:04.297 bw ( KiB/s): min=22528, max=38400, per=5.61%, avg=35649.70, stdev=3243.63, samples=20 00:19:04.297 iops : min= 88, max= 150, avg=139.15, stdev=12.63, samples=20 00:19:04.297 lat (msec) : 250=0.27%, 500=96.36%, 750=3.37% 00:19:04.297 cpu : usr=0.29%, sys=0.46%, ctx=1414, majf=0, minf=1 00:19:04.297 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:19:04.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.297 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.297 issued rwts: total=0,1456,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.297 job9: (groupid=0, jobs=1): err= 0: pid=77693: Thu Dec 5 03:03:34 2024 00:19:04.297 write: IOPS=143, BW=36.0MiB/s (37.7MB/s)(370MiB/10275msec); 0 zone resets 00:19:04.297 slat (usec): min=16, max=170967, avg=6621.83, stdev=12510.52 00:19:04.297 clat (msec): min=18, max=714, avg=437.98, stdev=61.57 00:19:04.297 lat (msec): min=18, max=714, avg=444.61, stdev=61.41 00:19:04.297 clat percentiles (msec): 00:19:04.297 | 1.00th=[ 84], 5.00th=[ 405], 10.00th=[ 418], 20.00th=[ 426], 00:19:04.297 | 30.00th=[ 439], 40.00th=[ 443], 50.00th=[ 447], 60.00th=[ 447], 00:19:04.297 | 70.00th=[ 451], 80.00th=[ 456], 90.00th=[ 456], 95.00th=[ 464], 00:19:04.297 | 99.00th=[ 600], 99.50th=[ 659], 99.90th=[ 718], 99.95th=[ 718], 00:19:04.297 | 99.99th=[ 718] 00:19:04.297 bw ( KiB/s): min=31294, max=38912, per=5.70%, avg=36223.40, stdev=1527.28, samples=20 00:19:04.297 iops : min= 122, max= 152, avg=141.45, stdev= 6.00, samples=20 00:19:04.297 lat (msec) : 20=0.27%, 50=0.27%, 100=0.54%, 250=0.95%, 500=93.98% 00:19:04.297 lat (msec) : 750=3.99% 00:19:04.297 cpu : usr=0.23%, sys=0.49%, ctx=1587, majf=0, minf=1 00:19:04.297 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:19:04.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.297 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.297 issued rwts: total=0,1478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.297 job10: (groupid=0, jobs=1): err= 0: pid=77694: Thu Dec 5 03:03:34 2024 00:19:04.297 write: IOPS=399, BW=99.8MiB/s (105MB/s)(1011MiB/10126msec); 0 zone resets 00:19:04.297 slat (usec): min=17, max=97799, avg=2387.44, stdev=4479.66 00:19:04.297 clat (msec): min=5, max=359, avg=157.85, stdev=32.09 00:19:04.297 lat (msec): min=7, max=362, avg=160.24, stdev=32.32 00:19:04.297 clat percentiles (msec): 00:19:04.297 | 1.00th=[ 32], 5.00th=[ 112], 10.00th=[ 150], 20.00th=[ 153], 00:19:04.297 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 161], 00:19:04.297 | 70.00th=[ 163], 80.00th=[ 163], 90.00th=[ 165], 95.00th=[ 167], 00:19:04.297 | 99.00th=[ 284], 99.50th=[ 334], 99.90th=[ 355], 99.95th=[ 355], 00:19:04.297 | 99.99th=[ 359] 00:19:04.297 bw ( KiB/s): min=83456, max=130048, per=16.03%, avg=101847.00, stdev=7857.76, samples=20 00:19:04.297 iops : min= 326, max= 508, avg=397.80, 
stdev=30.69, samples=20 00:19:04.297 lat (msec) : 10=0.10%, 20=0.49%, 50=1.26%, 100=2.84%, 250=93.10% 00:19:04.297 lat (msec) : 500=2.20% 00:19:04.297 cpu : usr=0.61%, sys=1.31%, ctx=4900, majf=0, minf=1 00:19:04.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:04.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.297 issued rwts: total=0,4043,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.297 00:19:04.297 Run status group 0 (all jobs): 00:19:04.297 WRITE: bw=620MiB/s (650MB/s), 35.5MiB/s-105MiB/s (37.2MB/s-110MB/s), io=6377MiB (6687MB), run=10126-10281msec 00:19:04.297 00:19:04.297 Disk stats (read/write): 00:19:04.297 nvme0n1: ios=49/5814, merge=0/0, ticks=38/1207678, in_queue=1207716, util=97.72% 00:19:04.297 nvme10n1: ios=49/5269, merge=0/0, ticks=48/1207276, in_queue=1207324, util=97.96% 00:19:04.297 nvme1n1: ios=45/2885, merge=0/0, ticks=47/1232312, in_queue=1232359, util=97.94% 00:19:04.297 nvme2n1: ios=28/2907, merge=0/0, ticks=31/1234089, in_queue=1234120, util=98.00% 00:19:04.297 nvme3n1: ios=22/2948, merge=0/0, ticks=41/1235177, in_queue=1235218, util=98.19% 00:19:04.297 nvme4n1: ios=0/2926, merge=0/0, ticks=0/1233505, in_queue=1233505, util=98.24% 00:19:04.297 nvme5n1: ios=0/5262, merge=0/0, ticks=0/1207684, in_queue=1207684, util=98.47% 00:19:04.297 nvme6n1: ios=0/8355, merge=0/0, ticks=0/1209106, in_queue=1209106, util=98.34% 00:19:04.297 nvme7n1: ios=0/2875, merge=0/0, ticks=0/1232455, in_queue=1232455, util=98.57% 00:19:04.297 nvme8n1: ios=0/2926, merge=0/0, ticks=0/1234296, in_queue=1234296, util=98.87% 00:19:04.297 nvme9n1: ios=0/7927, merge=0/0, ticks=0/1210404, in_queue=1210404, util=98.80% 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:04.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:04.297 03:03:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:04.297 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:04.297 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:04.297 03:03:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:04.297 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:04.297 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:04.298 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:04.298 03:03:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:04.298 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:04.298 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:04.298 03:03:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:04.298 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:04.298 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:04.298 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:04.298 03:03:35 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:04.298 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.298 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:04.556 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:04.556 
03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:04.556 rmmod nvme_tcp 00:19:04.556 rmmod nvme_fabrics 00:19:04.556 rmmod nvme_keyring 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 77006 ']' 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 77006 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 77006 ']' 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 77006 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77006 00:19:04.556 killing process with pid 77006 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77006' 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 77006 00:19:04.556 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 77006 00:19:07.090 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:07.090 03:03:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:07.090 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:07.090 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:19:07.090 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:07.090 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:19:07.090 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:19:07.090 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:07.090 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:07.090 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:07.090 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:07.090 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:07.090 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:07.090 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:07.090 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:07.090 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:07.090 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:07.090 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:07.090 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:07.349 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:07.349 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:07.349 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:07.349 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:07.349 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.349 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.349 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.349 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:19:07.349 00:19:07.349 real 0m52.258s 00:19:07.349 user 3m2.269s 00:19:07.349 sys 0m22.994s 00:19:07.349 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.349 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:19:07.349 ************************************ 00:19:07.349 END TEST nvmf_multiconnection 00:19:07.349 ************************************ 00:19:07.349 03:03:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:07.349 03:03:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:07.350 03:03:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.350 03:03:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:07.350 ************************************ 00:19:07.350 START TEST nvmf_initiator_timeout 00:19:07.350 ************************************ 00:19:07.350 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:07.350 * Looking for test storage... 00:19:07.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:07.350 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:07.350 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:19:07.350 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.609 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:07.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.609 --rc genhtml_branch_coverage=1 00:19:07.609 --rc genhtml_function_coverage=1 00:19:07.609 --rc genhtml_legend=1 00:19:07.609 --rc geninfo_all_blocks=1 00:19:07.609 --rc geninfo_unexecuted_blocks=1 00:19:07.609 00:19:07.609 ' 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:07.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.610 --rc genhtml_branch_coverage=1 00:19:07.610 --rc genhtml_function_coverage=1 00:19:07.610 --rc genhtml_legend=1 00:19:07.610 --rc geninfo_all_blocks=1 00:19:07.610 --rc geninfo_unexecuted_blocks=1 00:19:07.610 00:19:07.610 ' 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:07.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.610 --rc genhtml_branch_coverage=1 00:19:07.610 --rc genhtml_function_coverage=1 00:19:07.610 --rc genhtml_legend=1 00:19:07.610 --rc geninfo_all_blocks=1 00:19:07.610 --rc geninfo_unexecuted_blocks=1 00:19:07.610 00:19:07.610 ' 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:07.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.610 --rc genhtml_branch_coverage=1 00:19:07.610 --rc genhtml_function_coverage=1 00:19:07.610 --rc genhtml_legend=1 00:19:07.610 --rc geninfo_all_blocks=1 00:19:07.610 --rc geninfo_unexecuted_blocks=1 00:19:07.610 00:19:07.610 ' 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.610 03:03:38 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:07.610 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
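The NVMF_* variables set here (together with the interface and bridge names that continue just below) describe the virtual test network that nvmf_veth_init builds in the following records: initiator addresses 10.0.0.1 and 10.0.0.2 stay in the root namespace, target addresses 10.0.0.3 and 10.0.0.4 sit on veth peers moved into the nvmf_tgt_ns_spdk namespace, and everything is joined by the nvmf_br bridge. A condensed sketch of that layout for one initiator/target pair, not the verbatim helper from nvmf/common.sh:

  # initiator-side veth pair: nvmf_init_if stays in the root namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip addr add 10.0.0.1/24 dev nvmf_init_if

  # target-side veth pair: nvmf_tgt_if is moved into the target namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # bridge the two sides so 10.0.0.1 can reach 10.0.0.3
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

The second pair (nvmf_init_if2 / nvmf_tgt_if2) is wired up the same way, and the ping checks further down in the trace confirm reachability in both directions.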
00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:07.610 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:07.611 Cannot find device "nvmf_init_br" 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:07.611 Cannot find device "nvmf_init_br2" 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:07.611 Cannot find device "nvmf_tgt_br" 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:07.611 Cannot find device "nvmf_tgt_br2" 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:07.611 Cannot find device "nvmf_init_br" 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:07.611 Cannot find device "nvmf_init_br2" 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:07.611 Cannot find device "nvmf_tgt_br" 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:07.611 Cannot find device "nvmf_tgt_br2" 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:19:07.611 03:03:38 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:07.611 Cannot find device "nvmf_br" 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:07.611 Cannot find device "nvmf_init_if" 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:07.611 Cannot find device "nvmf_init_if2" 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:07.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:07.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:07.611 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:07.870 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:07.871 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:07.871 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.124 ms 00:19:07.871 00:19:07.871 --- 10.0.0.3 ping statistics --- 00:19:07.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.871 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:07.871 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:07.871 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:19:07.871 00:19:07.871 --- 10.0.0.4 ping statistics --- 00:19:07.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.871 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:07.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:07.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:19:07.871 00:19:07.871 --- 10.0.0.1 ping statistics --- 00:19:07.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.871 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:07.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:19:07.871 00:19:07.871 --- 10.0.0.2 ping statistics --- 00:19:07.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.871 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=78133 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 78133 00:19:07.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
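nvmfappstart, seen here as PID 78133, amounts to launching nvmf_tgt inside the target namespace and then polling its RPC socket before any rpc_cmd call is issued. A rough shell equivalent of that launch-and-wait step, assuming the default /var/tmp/spdk.sock socket (waitforlisten in autotest_common.sh does more bookkeeping than this sketch):

  # start the target inside the namespace, exactly as in the trace above
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # wait until the target answers on its RPC socket
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
          >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
      sleep 0.5
  done

Once the socket answers, the trace continues with the subsystem setup below.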
00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 78133 ']' 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.871 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:08.130 [2024-12-05 03:03:38.817154] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:19:08.130 [2024-12-05 03:03:38.817329] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.389 [2024-12-05 03:03:39.001859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:08.389 [2024-12-05 03:03:39.086572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.389 [2024-12-05 03:03:39.086652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.389 [2024-12-05 03:03:39.086689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.389 [2024-12-05 03:03:39.086701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.389 [2024-12-05 03:03:39.086713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
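The records that follow are the core setup of initiator_timeout.sh: a 64 MiB malloc bdev is wrapped in a delay bdev (Delay0, latencies given in microseconds) and exported over NVMe/TCP on 10.0.0.3:4420, so the test can later raise the delay to roughly 31 s to provoke initiator timeouts before dropping it back. Condensed from the rpc_cmd calls visible in the trace (rpc_cmd is the test wrapper around scripts/rpc.py):

  rpc_cmd bdev_malloc_create 64 512 -b Malloc0                            # 64 MiB, 512 B blocks
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30  # 30 us baseline latencies
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # host side: connect, then bump the delay latencies mid-run to force timeouts
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000              # ~31 s average read delay

The fio write job launched afterwards keeps I/O in flight while the latencies are toggled; its job file is echoed into the log further down.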
00:19:08.389 [2024-12-05 03:03:39.088602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.389 [2024-12-05 03:03:39.088718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.389 [2024-12-05 03:03:39.088948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.389 [2024-12-05 03:03:39.089468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:08.648 [2024-12-05 03:03:39.261252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:09.218 Malloc0 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:09.218 Delay0 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:09.218 [2024-12-05 03:03:39.903268] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:09.218 03:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:09.218 [2024-12-05 03:03:39.935623] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.218 03:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:19:09.478 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:09.478 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:19:09.478 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:09.478 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:09.478 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:19:11.393 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:11.393 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:11.393 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:11.393 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:11.393 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:11.393 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:19:11.393 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=78197 00:19:11.393 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:11.393 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:11.393 [global] 00:19:11.393 thread=1 00:19:11.393 invalidate=1 00:19:11.393 rw=write 00:19:11.393 time_based=1 00:19:11.393 runtime=60 00:19:11.393 ioengine=libaio 00:19:11.393 direct=1 00:19:11.393 bs=4096 00:19:11.393 iodepth=1 00:19:11.393 norandommap=0 00:19:11.393 numjobs=1 00:19:11.393 00:19:11.393 verify_dump=1 00:19:11.393 verify_backlog=512 00:19:11.393 verify_state_save=0 00:19:11.393 do_verify=1 00:19:11.393 verify=crc32c-intel 00:19:11.393 [job0] 00:19:11.393 filename=/dev/nvme0n1 00:19:11.393 Could not set queue depth (nvme0n1) 00:19:11.664 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:11.664 fio-3.35 00:19:11.664 Starting 1 thread 00:19:14.951 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:14.951 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.951 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:14.951 true 00:19:14.951 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.951 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:14.951 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.951 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:14.951 true 00:19:14.951 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.951 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:14.951 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.951 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:14.951 true 00:19:14.951 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.951 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:14.951 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.951 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:14.951 true 00:19:14.951 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.951 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:19:17.479 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:17.479 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.479 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:17.479 true 00:19:17.479 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.479 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:17.479 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.479 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:17.479 true 00:19:17.479 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.480 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:17.480 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.480 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:17.480 true 00:19:17.480 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.480 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:17.480 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.480 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:17.480 true 00:19:17.480 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.480 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:17.480 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 78197 00:20:13.729 00:20:13.729 job0: (groupid=0, jobs=1): err= 0: pid=78218: Thu Dec 5 03:04:42 2024 00:20:13.729 read: IOPS=700, BW=2802KiB/s (2869kB/s)(164MiB/60000msec) 00:20:13.729 slat (usec): min=10, max=7527, avg=14.16, stdev=45.35 00:20:13.729 clat (usec): min=156, max=40638k, avg=1200.16, stdev=198221.25 00:20:13.729 lat (usec): min=195, max=40638k, avg=1214.31, stdev=198221.27 00:20:13.729 clat percentiles (usec): 00:20:13.729 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 212], 00:20:13.729 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 235], 00:20:13.730 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 277], 00:20:13.730 | 99.00th=[ 310], 99.50th=[ 334], 99.90th=[ 570], 99.95th=[ 660], 00:20:13.730 | 99.99th=[ 955] 00:20:13.730 write: IOPS=708, BW=2833KiB/s (2901kB/s)(166MiB/60000msec); 0 zone resets 00:20:13.730 slat (usec): min=12, max=703, avg=20.37, stdev= 7.49 00:20:13.730 clat (usec): min=123, max=2176, avg=187.29, stdev=29.91 00:20:13.730 lat (usec): min=155, max=2195, avg=207.67, stdev=31.37 00:20:13.730 clat percentiles (usec): 00:20:13.730 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 169], 00:20:13.730 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 190], 00:20:13.730 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 215], 95.00th=[ 225], 00:20:13.730 | 99.00th=[ 253], 
99.50th=[ 269], 99.90th=[ 510], 99.95th=[ 594], 00:20:13.730 | 99.99th=[ 1012] 00:20:13.730 bw ( KiB/s): min= 4512, max= 9704, per=100.00%, avg=8730.95, stdev=880.84, samples=38 00:20:13.730 iops : min= 1128, max= 2426, avg=2182.74, stdev=220.21, samples=38 00:20:13.730 lat (usec) : 250=89.27%, 500=10.58%, 750=0.12%, 1000=0.02% 00:20:13.730 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:20:13.730 cpu : usr=0.51%, sys=1.90%, ctx=84540, majf=0, minf=5 00:20:13.730 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:13.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.730 issued rwts: total=42030,42496,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.730 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:13.730 00:20:13.730 Run status group 0 (all jobs): 00:20:13.730 READ: bw=2802KiB/s (2869kB/s), 2802KiB/s-2802KiB/s (2869kB/s-2869kB/s), io=164MiB (172MB), run=60000-60000msec 00:20:13.730 WRITE: bw=2833KiB/s (2901kB/s), 2833KiB/s-2833KiB/s (2901kB/s-2901kB/s), io=166MiB (174MB), run=60000-60000msec 00:20:13.730 00:20:13.730 Disk stats (read/write): 00:20:13.730 nvme0n1: ios=42181/42055, merge=0/0, ticks=10256/8357, in_queue=18613, util=99.54% 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:13.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:13.730 nvmf hotplug test: fio successful as expected 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:13.730 03:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:13.730 rmmod nvme_tcp 00:20:13.730 rmmod nvme_fabrics 00:20:13.730 rmmod nvme_keyring 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 78133 ']' 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 78133 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 78133 ']' 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 78133 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78133 00:20:13.730 killing process with pid 78133 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78133' 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 78133 00:20:13.730 03:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 78133 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:20:13.730 03:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:20:13.730 00:20:13.730 real 1m5.771s 00:20:13.730 user 3m55.595s 00:20:13.730 sys 0m21.794s 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.730 ************************************ 00:20:13.730 END TEST nvmf_initiator_timeout 00:20:13.730 ************************************ 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test 
nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:13.730 ************************************ 00:20:13.730 START TEST nvmf_nsid 00:20:13.730 ************************************ 00:20:13.730 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:13.730 * Looking for test storage... 00:20:13.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:13.731 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:13.731 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:20:13.731 03:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:13.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.731 --rc genhtml_branch_coverage=1 00:20:13.731 --rc genhtml_function_coverage=1 00:20:13.731 --rc genhtml_legend=1 00:20:13.731 --rc geninfo_all_blocks=1 00:20:13.731 --rc geninfo_unexecuted_blocks=1 00:20:13.731 00:20:13.731 ' 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:13.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.731 --rc genhtml_branch_coverage=1 00:20:13.731 --rc genhtml_function_coverage=1 00:20:13.731 --rc genhtml_legend=1 00:20:13.731 --rc geninfo_all_blocks=1 00:20:13.731 --rc geninfo_unexecuted_blocks=1 00:20:13.731 00:20:13.731 ' 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:13.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.731 --rc genhtml_branch_coverage=1 00:20:13.731 --rc genhtml_function_coverage=1 00:20:13.731 --rc genhtml_legend=1 00:20:13.731 --rc geninfo_all_blocks=1 00:20:13.731 --rc geninfo_unexecuted_blocks=1 00:20:13.731 00:20:13.731 ' 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:13.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.731 --rc genhtml_branch_coverage=1 00:20:13.731 --rc genhtml_function_coverage=1 00:20:13.731 --rc genhtml_legend=1 00:20:13.731 --rc geninfo_all_blocks=1 00:20:13.731 --rc geninfo_unexecuted_blocks=1 00:20:13.731 00:20:13.731 ' 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
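The cmp_versions/lt trace above boils down to splitting both version strings on '.' and comparing the fields numerically from left to right. A minimal bash sketch of that idea (illustrative only, not the exact scripts/common.sh helper; the function name here is made up):

version_lt() {                              # returns 0 if $1 is strictly older than $2
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                # equal versions are not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 is older than 2"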
00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:13.731 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:20:13.731 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:13.732 Cannot find device "nvmf_init_br" 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:13.732 Cannot find device "nvmf_init_br2" 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:13.732 Cannot find device "nvmf_tgt_br" 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:13.732 Cannot find device "nvmf_tgt_br2" 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:13.732 Cannot find device "nvmf_init_br" 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:13.732 Cannot find device "nvmf_init_br2" 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:13.732 Cannot find device "nvmf_tgt_br" 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:13.732 Cannot find device "nvmf_tgt_br2" 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:13.732 Cannot find device "nvmf_br" 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:13.732 Cannot find device "nvmf_init_if" 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:13.732 Cannot find device "nvmf_init_if2" 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:13.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:20:13.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
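Condensed sketch of the virtual topology the nvmf_veth_init trace above builds, with interface names and addresses taken from the log (the real helper in test/nvmf/common.sh also sets up the second initiator/target pair, nvmf_init_if2 and nvmf_tgt_if2):

ip netns add nvmf_tgt_ns_spdk                                          # target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br              # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                               # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if # target listen address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge                                        # bridge the two sides together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br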
00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:13.732 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:13.732 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.119 ms 00:20:13.732 00:20:13.732 --- 10.0.0.3 ping statistics --- 00:20:13.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.732 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:13.732 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:13.732 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:20:13.732 00:20:13.732 --- 10.0.0.4 ping statistics --- 00:20:13.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.732 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:13.732 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:13.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:13.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:20:13.732 00:20:13.732 --- 10.0.0.1 ping statistics --- 00:20:13.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.732 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:13.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:13.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:20:13.733 00:20:13.733 --- 10.0.0.2 ping statistics --- 00:20:13.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.733 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=79092 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 79092 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 79092 ']' 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.733 03:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:13.992 [2024-12-05 03:04:44.618352] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:20:13.992 [2024-12-05 03:04:44.618504] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.992 [2024-12-05 03:04:44.794815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.250 [2024-12-05 03:04:44.890833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.251 [2024-12-05 03:04:44.890897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.251 [2024-12-05 03:04:44.890918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.251 [2024-12-05 03:04:44.890942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.251 [2024-12-05 03:04:44.890957] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.251 [2024-12-05 03:04:44.892166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.251 [2024-12-05 03:04:45.061538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:14.817 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.817 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:14.817 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:14.817 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:14.817 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=79130 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=939baa4d-6b32-4cdd-8c7f-3ab92c1b6f56 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=01dfe8a3-566d-4bed-89e4-2533bc72c894 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=0d64c3b3-4cf3-4565-bda2-5f998fa85860 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:15.076 null0 00:20:15.076 null1 00:20:15.076 null2 00:20:15.076 [2024-12-05 03:04:45.721324] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.076 [2024-12-05 03:04:45.745460] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 79130 /var/tmp/tgt2.sock 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 79130 ']' 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.076 03:04:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:15.076 [2024-12-05 03:04:45.798504] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:20:15.076 [2024-12-05 03:04:45.798673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79130 ] 00:20:15.334 [2024-12-05 03:04:45.979451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.335 [2024-12-05 03:04:46.076585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.594 [2024-12-05 03:04:46.299173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:16.161 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.161 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:16.161 03:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:20:16.420 [2024-12-05 03:04:47.261145] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.678 [2024-12-05 03:04:47.277396] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:20:16.678 nvme0n1 nvme0n2 00:20:16.678 nvme1n1 00:20:16.678 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:20:16.678 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:20:16.678 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:20:16.678 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:20:16.678 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:20:16.678 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:20:16.678 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:20:16.678 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:20:16.678 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:20:16.678 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:20:16.678 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:16.678 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:16.678 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:16.678 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:20:16.678 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:20:16.678 03:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 939baa4d-6b32-4cdd-8c7f-3ab92c1b6f56 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=939baa4d6b324cdd8c7f3ab92c1b6f56 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 939BAA4D6B324CDD8C7F3AB92C1B6F56 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 939BAA4D6B324CDD8C7F3AB92C1B6F56 == \9\3\9\B\A\A\4\D\6\B\3\2\4\C\D\D\8\C\7\F\3\A\B\9\2\C\1\B\6\F\5\6 ]] 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 01dfe8a3-566d-4bed-89e4-2533bc72c894 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=01dfe8a3566d4bed89e42533bc72c894 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 01DFE8A3566D4BED89E42533BC72C894 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 01DFE8A3566D4BED89E42533BC72C894 == \0\1\D\F\E\8\A\3\5\6\6\D\4\B\E\D\8\9\E\4\2\5\3\3\B\C\7\2\C\8\9\4 ]] 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 0d64c3b3-4cf3-4565-bda2-5f998fa85860 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0d64c3b34cf34565bda25f998fa85860 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0D64C3B34CF34565BDA25F998FA85860 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 0D64C3B34CF34565BDA25F998FA85860 == \0\D\6\4\C\3\B\3\4\C\F\3\4\5\6\5\B\D\A\2\5\F\9\9\8\F\A\8\5\8\6\0 ]] 00:20:18.058 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:20:18.317 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:20:18.317 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:20:18.317 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 79130 00:20:18.317 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 79130 ']' 00:20:18.317 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 79130 00:20:18.317 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:18.317 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.317 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79130 00:20:18.317 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:18.317 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:18.317 killing process with pid 79130 00:20:18.317 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79130' 00:20:18.317 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 79130 00:20:18.317 03:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 79130 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:20:20.243 
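(The three NGUID comparisons above all reduce to the same recipe: the test expects the reported NGUID to be the namespace UUID with its dashes removed. A stand-alone sketch of that check follows, using the UUID and device from the first comparison in the trace; everything else is assumed.)

  expected_uuid=939baa4d-6b32-4cdd-8c7f-3ab92c1b6f56   # value from the trace
  dev=/dev/nvme0n1
  # Strip the dashes locally, read the NGUID via nvme-cli + jq, compare case-insensitively.
  expected_nguid=$(tr -d '-' <<< "$expected_uuid")
  reported_nguid=$(nvme id-ns "$dev" -o json | jq -r .nguid)
  [[ ${reported_nguid^^} == "${expected_nguid^^}" ]] \
      && echo "NGUID ok for $dev" \
      || echo "NGUID mismatch for $dev: $reported_nguid != $expected_nguid" >&2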
03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:20.243 rmmod nvme_tcp 00:20:20.243 rmmod nvme_fabrics 00:20:20.243 rmmod nvme_keyring 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 79092 ']' 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 79092 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 79092 ']' 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 79092 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79092 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:20.243 killing process with pid 79092 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79092' 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 79092 00:20:20.243 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 79092 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:21.181 03:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:20:21.181 00:20:21.181 real 0m7.998s 00:20:21.181 user 0m12.579s 00:20:21.181 sys 0m1.868s 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.181 ************************************ 00:20:21.181 END TEST nvmf_nsid 00:20:21.181 ************************************ 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:21.181 00:20:21.181 real 7m44.143s 00:20:21.181 user 18m50.413s 00:20:21.181 sys 1m52.295s 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.181 03:04:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:21.181 ************************************ 00:20:21.181 END TEST nvmf_target_extra 00:20:21.181 ************************************ 00:20:21.181 03:04:51 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:21.181 03:04:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:21.181 03:04:51 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.181 03:04:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:21.181 ************************************ 00:20:21.181 START TEST nvmf_host 00:20:21.181 ************************************ 00:20:21.181 03:04:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:21.442 * Looking for test storage... 
00:20:21.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:21.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.442 --rc genhtml_branch_coverage=1 00:20:21.442 --rc genhtml_function_coverage=1 00:20:21.442 --rc genhtml_legend=1 00:20:21.442 --rc geninfo_all_blocks=1 00:20:21.442 --rc geninfo_unexecuted_blocks=1 00:20:21.442 00:20:21.442 ' 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:21.442 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:20:21.442 --rc genhtml_branch_coverage=1 00:20:21.442 --rc genhtml_function_coverage=1 00:20:21.442 --rc genhtml_legend=1 00:20:21.442 --rc geninfo_all_blocks=1 00:20:21.442 --rc geninfo_unexecuted_blocks=1 00:20:21.442 00:20:21.442 ' 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:21.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.442 --rc genhtml_branch_coverage=1 00:20:21.442 --rc genhtml_function_coverage=1 00:20:21.442 --rc genhtml_legend=1 00:20:21.442 --rc geninfo_all_blocks=1 00:20:21.442 --rc geninfo_unexecuted_blocks=1 00:20:21.442 00:20:21.442 ' 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:21.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.442 --rc genhtml_branch_coverage=1 00:20:21.442 --rc genhtml_function_coverage=1 00:20:21.442 --rc genhtml_legend=1 00:20:21.442 --rc geninfo_all_blocks=1 00:20:21.442 --rc geninfo_unexecuted_blocks=1 00:20:21.442 00:20:21.442 ' 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.442 03:04:52 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:21.443 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:21.443 
03:04:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.443 ************************************ 00:20:21.443 START TEST nvmf_identify 00:20:21.443 ************************************ 00:20:21.443 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:21.704 * Looking for test storage... 00:20:21.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:21.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.704 --rc genhtml_branch_coverage=1 00:20:21.704 --rc genhtml_function_coverage=1 00:20:21.704 --rc genhtml_legend=1 00:20:21.704 --rc geninfo_all_blocks=1 00:20:21.704 --rc geninfo_unexecuted_blocks=1 00:20:21.704 00:20:21.704 ' 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:21.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.704 --rc genhtml_branch_coverage=1 00:20:21.704 --rc genhtml_function_coverage=1 00:20:21.704 --rc genhtml_legend=1 00:20:21.704 --rc geninfo_all_blocks=1 00:20:21.704 --rc geninfo_unexecuted_blocks=1 00:20:21.704 00:20:21.704 ' 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:21.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.704 --rc genhtml_branch_coverage=1 00:20:21.704 --rc genhtml_function_coverage=1 00:20:21.704 --rc genhtml_legend=1 00:20:21.704 --rc geninfo_all_blocks=1 00:20:21.704 --rc geninfo_unexecuted_blocks=1 00:20:21.704 00:20:21.704 ' 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:21.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.704 --rc genhtml_branch_coverage=1 00:20:21.704 --rc genhtml_function_coverage=1 00:20:21.704 --rc genhtml_legend=1 00:20:21.704 --rc geninfo_all_blocks=1 00:20:21.704 --rc geninfo_unexecuted_blocks=1 00:20:21.704 00:20:21.704 ' 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.704 
03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.704 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:21.705 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.705 03:04:52 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:21.705 Cannot find device "nvmf_init_br" 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:21.705 Cannot find device "nvmf_init_br2" 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:21.705 Cannot find device "nvmf_tgt_br" 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:20:21.705 Cannot find device "nvmf_tgt_br2" 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:21.705 Cannot find device "nvmf_init_br" 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:21.705 Cannot find device "nvmf_init_br2" 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:21.705 Cannot find device "nvmf_tgt_br" 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:20:21.705 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:21.964 Cannot find device "nvmf_tgt_br2" 00:20:21.964 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:20:21.964 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:21.964 Cannot find device "nvmf_br" 00:20:21.964 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:20:21.964 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:21.964 Cannot find device "nvmf_init_if" 00:20:21.964 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:20:21.964 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:21.964 Cannot find device "nvmf_init_if2" 00:20:21.964 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:21.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:21.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:21.965 
03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:21.965 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:22.224 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:22.224 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:22.224 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:22.224 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:22.224 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:22.224 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:22.224 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:22.224 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:22.224 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:20:22.224 00:20:22.224 --- 10.0.0.3 ping statistics --- 00:20:22.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.224 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:22.224 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:22.224 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:22.224 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:20:22.224 00:20:22.224 --- 10.0.0.4 ping statistics --- 00:20:22.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.224 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:22.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:22.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:20:22.225 00:20:22.225 --- 10.0.0.1 ping statistics --- 00:20:22.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.225 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:22.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:22.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:20:22.225 00:20:22.225 --- 10.0.0.2 ping statistics --- 00:20:22.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.225 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=79515 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 79515 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 79515 ']' 00:20:22.225 
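(For orientation, a condensed sketch of the virtual topology that the nvmf_veth_init trace above builds and then verifies with the four pings: the target runs inside the nvmf_tgt_ns_spdk namespace and reaches the initiator side through veth pairs slaved to one bridge. Interface names and addresses are taken from the trace; only one initiator/target interface pair of the two is shown, and the snippet is illustrative rather than the harness itself.)

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3                                             # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target -> initiator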
03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.225 03:04:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.225 [2024-12-05 03:04:52.966052] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:20:22.225 [2024-12-05 03:04:52.966193] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.484 [2024-12-05 03:04:53.141551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.484 [2024-12-05 03:04:53.273823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.484 [2024-12-05 03:04:53.274161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.484 [2024-12-05 03:04:53.274356] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.484 [2024-12-05 03:04:53.274617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.484 [2024-12-05 03:04:53.274681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:22.484 [2024-12-05 03:04:53.277053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.484 [2024-12-05 03:04:53.277224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.484 [2024-12-05 03:04:53.277316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.484 [2024-12-05 03:04:53.277907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.744 [2024-12-05 03:04:53.466473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:23.375 03:04:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.375 03:04:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:20:23.375 03:04:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:23.375 03:04:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.375 03:04:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.375 [2024-12-05 03:04:53.983367] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.375 Malloc0 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.375 [2024-12-05 03:04:54.147489] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.375 [ 00:20:23.375 { 00:20:23.375 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:23.375 "subtype": "Discovery", 00:20:23.375 "listen_addresses": [ 00:20:23.375 { 00:20:23.375 "trtype": "TCP", 00:20:23.375 "adrfam": "IPv4", 00:20:23.375 "traddr": "10.0.0.3", 00:20:23.375 "trsvcid": "4420" 00:20:23.375 } 00:20:23.375 ], 00:20:23.375 "allow_any_host": true, 00:20:23.375 "hosts": [] 00:20:23.375 }, 00:20:23.375 { 00:20:23.375 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.375 "subtype": "NVMe", 00:20:23.375 "listen_addresses": [ 00:20:23.375 { 00:20:23.375 "trtype": "TCP", 00:20:23.375 "adrfam": "IPv4", 00:20:23.375 "traddr": "10.0.0.3", 00:20:23.375 "trsvcid": "4420" 00:20:23.375 } 00:20:23.375 ], 00:20:23.375 "allow_any_host": true, 00:20:23.375 "hosts": [], 00:20:23.375 "serial_number": "SPDK00000000000001", 00:20:23.375 "model_number": "SPDK bdev Controller", 00:20:23.375 "max_namespaces": 32, 00:20:23.375 "min_cntlid": 1, 00:20:23.375 "max_cntlid": 65519, 00:20:23.375 "namespaces": [ 00:20:23.375 { 00:20:23.375 "nsid": 1, 00:20:23.375 "bdev_name": "Malloc0", 00:20:23.375 "name": "Malloc0", 00:20:23.375 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:23.375 "eui64": "ABCDEF0123456789", 00:20:23.375 "uuid": "e87b7d1b-f2aa-46ee-a53b-cd440f6e5327" 00:20:23.375 } 00:20:23.375 ] 00:20:23.375 } 00:20:23.375 ] 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.375 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:23.637 [2024-12-05 03:04:54.227999] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
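(The identify test's target-side setup above is driven entirely through rpc.py; a condensed sketch of that RPC sequence follows, assuming an nvmf_tgt already running and listening on the default /var/tmp/spdk.sock. The arguments mirror the rpc_cmd calls in the trace.)

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_get_subsystems          # returns the JSON block shown above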
00:20:23.637 [2024-12-05 03:04:54.228089] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79550 ] 00:20:23.637 [2024-12-05 03:04:54.410098] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:20:23.637 [2024-12-05 03:04:54.410251] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:23.637 [2024-12-05 03:04:54.410266] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:23.637 [2024-12-05 03:04:54.410290] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:23.637 [2024-12-05 03:04:54.410307] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:23.637 [2024-12-05 03:04:54.410768] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:20:23.637 [2024-12-05 03:04:54.410873] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:20:23.637 [2024-12-05 03:04:54.423847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:23.637 [2024-12-05 03:04:54.423899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:23.637 [2024-12-05 03:04:54.423910] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:23.637 [2024-12-05 03:04:54.423917] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:23.637 [2024-12-05 03:04:54.424006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.637 [2024-12-05 03:04:54.424022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.637 [2024-12-05 03:04:54.424031] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.637 [2024-12-05 03:04:54.424083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:23.637 [2024-12-05 03:04:54.424130] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.637 [2024-12-05 03:04:54.431847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.637 [2024-12-05 03:04:54.431898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.637 [2024-12-05 03:04:54.431907] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.637 [2024-12-05 03:04:54.431916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.637 [2024-12-05 03:04:54.431941] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:23.637 [2024-12-05 03:04:54.431959] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:20:23.637 [2024-12-05 03:04:54.431970] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:20:23.637 [2024-12-05 03:04:54.431997] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.637 [2024-12-05 03:04:54.432007] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.637 [2024-12-05 03:04:54.432014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.637 [2024-12-05 03:04:54.432034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-12-05 03:04:54.432090] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.637 [2024-12-05 03:04:54.432174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.637 [2024-12-05 03:04:54.432188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.637 [2024-12-05 03:04:54.432196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.637 [2024-12-05 03:04:54.432204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.637 [2024-12-05 03:04:54.432225] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:20:23.637 [2024-12-05 03:04:54.432240] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:20:23.637 [2024-12-05 03:04:54.432272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.637 [2024-12-05 03:04:54.432281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.637 [2024-12-05 03:04:54.432289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.637 [2024-12-05 03:04:54.432309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-12-05 03:04:54.432343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.637 [2024-12-05 03:04:54.432427] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.637 [2024-12-05 03:04:54.432440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.637 [2024-12-05 03:04:54.432447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.637 [2024-12-05 03:04:54.432454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.637 [2024-12-05 03:04:54.432465] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:20:23.637 [2024-12-05 03:04:54.432486] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:23.637 [2024-12-05 03:04:54.432504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.637 [2024-12-05 03:04:54.432513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.637 [2024-12-05 03:04:54.432521] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.637 [2024-12-05 03:04:54.432535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-12-05 03:04:54.432563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.637 [2024-12-05 03:04:54.432628] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.637 [2024-12-05 03:04:54.432641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.637 [2024-12-05 03:04:54.432647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.432654] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.638 [2024-12-05 03:04:54.432665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:23.638 [2024-12-05 03:04:54.432683] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.432696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.432704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.638 [2024-12-05 03:04:54.432718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-12-05 03:04:54.432748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.638 [2024-12-05 03:04:54.432829] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.638 [2024-12-05 03:04:54.432843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.638 [2024-12-05 03:04:54.432850] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.432857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.638 [2024-12-05 03:04:54.432870] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:23.638 [2024-12-05 03:04:54.432881] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:23.638 [2024-12-05 03:04:54.432896] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:23.638 [2024-12-05 03:04:54.433007] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:20:23.638 [2024-12-05 03:04:54.433016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:23.638 [2024-12-05 03:04:54.433032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.433041] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.433048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.638 [2024-12-05 03:04:54.433063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-12-05 03:04:54.433098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.638 [2024-12-05 03:04:54.433176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.638 [2024-12-05 03:04:54.433188] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.638 [2024-12-05 03:04:54.433195] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.433202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.638 [2024-12-05 03:04:54.433212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:23.638 [2024-12-05 03:04:54.433230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.433239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.433247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.638 [2024-12-05 03:04:54.433267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-12-05 03:04:54.433295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.638 [2024-12-05 03:04:54.433353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.638 [2024-12-05 03:04:54.433366] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.638 [2024-12-05 03:04:54.433372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.433379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.638 [2024-12-05 03:04:54.433388] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:23.638 [2024-12-05 03:04:54.433398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:20:23.638 [2024-12-05 03:04:54.433432] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:20:23.638 [2024-12-05 03:04:54.433454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:23.638 [2024-12-05 03:04:54.433475] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.433484] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.638 [2024-12-05 03:04:54.433499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-12-05 03:04:54.433530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.638 [2024-12-05 03:04:54.433652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.638 [2024-12-05 03:04:54.433665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.638 [2024-12-05 03:04:54.433672] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.433680] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:20:23.638 [2024-12-05 03:04:54.433689] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:23.638 [2024-12-05 03:04:54.433698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.433712] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.433724] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.433742] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.638 [2024-12-05 03:04:54.433765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.638 [2024-12-05 03:04:54.433775] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.433783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.638 [2024-12-05 03:04:54.433802] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:20:23.638 [2024-12-05 03:04:54.433812] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:20:23.638 [2024-12-05 03:04:54.433821] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:20:23.638 [2024-12-05 03:04:54.433831] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:20:23.638 [2024-12-05 03:04:54.433839] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:20:23.638 [2024-12-05 03:04:54.433849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:20:23.638 [2024-12-05 03:04:54.433869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:23.638 [2024-12-05 03:04:54.433886] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.433898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.433908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.638 [2024-12-05 03:04:54.433924] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.638 [2024-12-05 03:04:54.433955] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.638 [2024-12-05 03:04:54.434028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.638 [2024-12-05 03:04:54.434041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.638 [2024-12-05 03:04:54.434053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.434062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.638 [2024-12-05 03:04:54.434075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.434084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.434091] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.638 [2024-12-05 03:04:54.434108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.638 [2024-12-05 03:04:54.434120] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.434127] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.434133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:20:23.638 [2024-12-05 03:04:54.434147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.638 [2024-12-05 03:04:54.434157] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.434164] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.434172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:20:23.638 [2024-12-05 03:04:54.434183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.638 [2024-12-05 03:04:54.434193] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.434200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.434206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.638 [2024-12-05 03:04:54.434217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.638 [2024-12-05 03:04:54.434226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:23.638 [2024-12-05 03:04:54.434248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:23.638 [2024-12-05 03:04:54.434262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.638 [2024-12-05 03:04:54.434273] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:23.638 [2024-12-05 03:04:54.434286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-12-05 03:04:54.434317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.638 [2024-12-05 03:04:54.434333] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:20:23.638 [2024-12-05 03:04:54.434342] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:20:23.639 [2024-12-05 03:04:54.434349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.639 [2024-12-05 03:04:54.434357] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:23.639 [2024-12-05 03:04:54.434470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.639 [2024-12-05 03:04:54.434495] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.639 [2024-12-05 03:04:54.434504] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.434511] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:23.639 [2024-12-05 03:04:54.434523] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:20:23.639 [2024-12-05 03:04:54.434533] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:20:23.639 [2024-12-05 03:04:54.434558] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.434568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:23.639 [2024-12-05 03:04:54.434583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.639 [2024-12-05 03:04:54.434611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:23.639 [2024-12-05 03:04:54.434703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.639 [2024-12-05 03:04:54.434740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.639 [2024-12-05 03:04:54.434765] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.434776] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:23.639 [2024-12-05 03:04:54.434785] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:23.639 [2024-12-05 03:04:54.434794] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.434814] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.434823] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.434838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.639 [2024-12-05 03:04:54.434851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.639 [2024-12-05 03:04:54.434858] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.434866] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:23.639 [2024-12-05 03:04:54.434893] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:20:23.639 [2024-12-05 03:04:54.434953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.434966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:23.639 [2024-12-05 03:04:54.434981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.639 [2024-12-05 03:04:54.434998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.435007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:20:23.639 [2024-12-05 03:04:54.435014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:23.639 [2024-12-05 03:04:54.435029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.639 [2024-12-05 03:04:54.435072] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:23.639 [2024-12-05 03:04:54.435086] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:23.639 [2024-12-05 03:04:54.435344] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.639 [2024-12-05 03:04:54.435371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.639 [2024-12-05 03:04:54.435380] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.435393] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4 00:20:23.639 [2024-12-05 03:04:54.435402] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024 00:20:23.639 [2024-12-05 03:04:54.435410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.435422] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.435430] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.435440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.639 [2024-12-05 03:04:54.435452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.639 [2024-12-05 03:04:54.435459] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.435467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:23.639 [2024-12-05 03:04:54.435495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.639 [2024-12-05 03:04:54.435508] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.639 [2024-12-05 03:04:54.435514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.435521] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:23.639 [2024-12-05 03:04:54.435555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.435568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:23.639 [2024-12-05 03:04:54.435583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.639 [2024-12-05 03:04:54.435632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:23.639 [2024-12-05 03:04:54.435750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.639 [2024-12-05 03:04:54.439796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.639 [2024-12-05 03:04:54.439808] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.439816] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x61500000f080): datao=0, datal=3072, cccid=4 00:20:23.639 [2024-12-05 03:04:54.439825] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=3072 00:20:23.639 [2024-12-05 03:04:54.439832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.439846] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.439853] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.439874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.639 [2024-12-05 03:04:54.439888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.639 [2024-12-05 03:04:54.439894] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.439902] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:23.639 [2024-12-05 03:04:54.439927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.439937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:23.639 [2024-12-05 03:04:54.439953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.639 [2024-12-05 03:04:54.439996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:23.639 [2024-12-05 03:04:54.440117] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.639 [2024-12-05 03:04:54.440132] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.639 [2024-12-05 03:04:54.440141] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.440149] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4 00:20:23.639 [2024-12-05 03:04:54.440157] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8 00:20:23.639 [2024-12-05 03:04:54.440165] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.440176] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.440183] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.440208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.639 [2024-12-05 03:04:54.440219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.639 [2024-12-05 03:04:54.440226] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.639 [2024-12-05 03:04:54.440233] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:23.639 ===================================================== 00:20:23.639 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:23.639 ===================================================== 00:20:23.639 Controller Capabilities/Features 00:20:23.639 ================================ 00:20:23.639 Vendor ID: 0000 00:20:23.639 Subsystem Vendor ID: 0000 00:20:23.639 Serial Number: .................... 
00:20:23.639 Model Number: ........................................ 00:20:23.639 Firmware Version: 25.01 00:20:23.639 Recommended Arb Burst: 0 00:20:23.639 IEEE OUI Identifier: 00 00 00 00:20:23.639 Multi-path I/O 00:20:23.639 May have multiple subsystem ports: No 00:20:23.639 May have multiple controllers: No 00:20:23.639 Associated with SR-IOV VF: No 00:20:23.639 Max Data Transfer Size: 131072 00:20:23.639 Max Number of Namespaces: 0 00:20:23.639 Max Number of I/O Queues: 1024 00:20:23.639 NVMe Specification Version (VS): 1.3 00:20:23.639 NVMe Specification Version (Identify): 1.3 00:20:23.639 Maximum Queue Entries: 128 00:20:23.639 Contiguous Queues Required: Yes 00:20:23.639 Arbitration Mechanisms Supported 00:20:23.639 Weighted Round Robin: Not Supported 00:20:23.639 Vendor Specific: Not Supported 00:20:23.639 Reset Timeout: 15000 ms 00:20:23.639 Doorbell Stride: 4 bytes 00:20:23.639 NVM Subsystem Reset: Not Supported 00:20:23.639 Command Sets Supported 00:20:23.639 NVM Command Set: Supported 00:20:23.639 Boot Partition: Not Supported 00:20:23.639 Memory Page Size Minimum: 4096 bytes 00:20:23.639 Memory Page Size Maximum: 4096 bytes 00:20:23.639 Persistent Memory Region: Not Supported 00:20:23.639 Optional Asynchronous Events Supported 00:20:23.639 Namespace Attribute Notices: Not Supported 00:20:23.640 Firmware Activation Notices: Not Supported 00:20:23.640 ANA Change Notices: Not Supported 00:20:23.640 PLE Aggregate Log Change Notices: Not Supported 00:20:23.640 LBA Status Info Alert Notices: Not Supported 00:20:23.640 EGE Aggregate Log Change Notices: Not Supported 00:20:23.640 Normal NVM Subsystem Shutdown event: Not Supported 00:20:23.640 Zone Descriptor Change Notices: Not Supported 00:20:23.640 Discovery Log Change Notices: Supported 00:20:23.640 Controller Attributes 00:20:23.640 128-bit Host Identifier: Not Supported 00:20:23.640 Non-Operational Permissive Mode: Not Supported 00:20:23.640 NVM Sets: Not Supported 00:20:23.640 Read Recovery Levels: Not Supported 00:20:23.640 Endurance Groups: Not Supported 00:20:23.640 Predictable Latency Mode: Not Supported 00:20:23.640 Traffic Based Keep ALive: Not Supported 00:20:23.640 Namespace Granularity: Not Supported 00:20:23.640 SQ Associations: Not Supported 00:20:23.640 UUID List: Not Supported 00:20:23.640 Multi-Domain Subsystem: Not Supported 00:20:23.640 Fixed Capacity Management: Not Supported 00:20:23.640 Variable Capacity Management: Not Supported 00:20:23.640 Delete Endurance Group: Not Supported 00:20:23.640 Delete NVM Set: Not Supported 00:20:23.640 Extended LBA Formats Supported: Not Supported 00:20:23.640 Flexible Data Placement Supported: Not Supported 00:20:23.640 00:20:23.640 Controller Memory Buffer Support 00:20:23.640 ================================ 00:20:23.640 Supported: No 00:20:23.640 00:20:23.640 Persistent Memory Region Support 00:20:23.640 ================================ 00:20:23.640 Supported: No 00:20:23.640 00:20:23.640 Admin Command Set Attributes 00:20:23.640 ============================ 00:20:23.640 Security Send/Receive: Not Supported 00:20:23.640 Format NVM: Not Supported 00:20:23.640 Firmware Activate/Download: Not Supported 00:20:23.640 Namespace Management: Not Supported 00:20:23.640 Device Self-Test: Not Supported 00:20:23.640 Directives: Not Supported 00:20:23.640 NVMe-MI: Not Supported 00:20:23.640 Virtualization Management: Not Supported 00:20:23.640 Doorbell Buffer Config: Not Supported 00:20:23.640 Get LBA Status Capability: Not Supported 00:20:23.640 Command & Feature Lockdown Capability: 
Not Supported 00:20:23.640 Abort Command Limit: 1 00:20:23.640 Async Event Request Limit: 4 00:20:23.640 Number of Firmware Slots: N/A 00:20:23.640 Firmware Slot 1 Read-Only: N/A 00:20:23.640 Firmware Activation Without Reset: N/A 00:20:23.640 Multiple Update Detection Support: N/A 00:20:23.640 Firmware Update Granularity: No Information Provided 00:20:23.640 Per-Namespace SMART Log: No 00:20:23.640 Asymmetric Namespace Access Log Page: Not Supported 00:20:23.640 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:23.640 Command Effects Log Page: Not Supported 00:20:23.640 Get Log Page Extended Data: Supported 00:20:23.640 Telemetry Log Pages: Not Supported 00:20:23.640 Persistent Event Log Pages: Not Supported 00:20:23.640 Supported Log Pages Log Page: May Support 00:20:23.640 Commands Supported & Effects Log Page: Not Supported 00:20:23.640 Feature Identifiers & Effects Log Page:May Support 00:20:23.640 NVMe-MI Commands & Effects Log Page: May Support 00:20:23.640 Data Area 4 for Telemetry Log: Not Supported 00:20:23.640 Error Log Page Entries Supported: 128 00:20:23.640 Keep Alive: Not Supported 00:20:23.640 00:20:23.640 NVM Command Set Attributes 00:20:23.640 ========================== 00:20:23.640 Submission Queue Entry Size 00:20:23.640 Max: 1 00:20:23.640 Min: 1 00:20:23.640 Completion Queue Entry Size 00:20:23.640 Max: 1 00:20:23.640 Min: 1 00:20:23.640 Number of Namespaces: 0 00:20:23.640 Compare Command: Not Supported 00:20:23.640 Write Uncorrectable Command: Not Supported 00:20:23.640 Dataset Management Command: Not Supported 00:20:23.640 Write Zeroes Command: Not Supported 00:20:23.640 Set Features Save Field: Not Supported 00:20:23.640 Reservations: Not Supported 00:20:23.640 Timestamp: Not Supported 00:20:23.640 Copy: Not Supported 00:20:23.640 Volatile Write Cache: Not Present 00:20:23.640 Atomic Write Unit (Normal): 1 00:20:23.640 Atomic Write Unit (PFail): 1 00:20:23.640 Atomic Compare & Write Unit: 1 00:20:23.640 Fused Compare & Write: Supported 00:20:23.640 Scatter-Gather List 00:20:23.640 SGL Command Set: Supported 00:20:23.640 SGL Keyed: Supported 00:20:23.640 SGL Bit Bucket Descriptor: Not Supported 00:20:23.640 SGL Metadata Pointer: Not Supported 00:20:23.640 Oversized SGL: Not Supported 00:20:23.640 SGL Metadata Address: Not Supported 00:20:23.640 SGL Offset: Supported 00:20:23.640 Transport SGL Data Block: Not Supported 00:20:23.640 Replay Protected Memory Block: Not Supported 00:20:23.640 00:20:23.640 Firmware Slot Information 00:20:23.640 ========================= 00:20:23.640 Active slot: 0 00:20:23.640 00:20:23.640 00:20:23.640 Error Log 00:20:23.640 ========= 00:20:23.640 00:20:23.640 Active Namespaces 00:20:23.640 ================= 00:20:23.640 Discovery Log Page 00:20:23.640 ================== 00:20:23.640 Generation Counter: 2 00:20:23.640 Number of Records: 2 00:20:23.640 Record Format: 0 00:20:23.640 00:20:23.640 Discovery Log Entry 0 00:20:23.640 ---------------------- 00:20:23.640 Transport Type: 3 (TCP) 00:20:23.640 Address Family: 1 (IPv4) 00:20:23.640 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:23.640 Entry Flags: 00:20:23.640 Duplicate Returned Information: 1 00:20:23.640 Explicit Persistent Connection Support for Discovery: 1 00:20:23.640 Transport Requirements: 00:20:23.640 Secure Channel: Not Required 00:20:23.640 Port ID: 0 (0x0000) 00:20:23.640 Controller ID: 65535 (0xffff) 00:20:23.640 Admin Max SQ Size: 128 00:20:23.640 Transport Service Identifier: 4420 00:20:23.640 NVM Subsystem Qualified Name: 
nqn.2014-08.org.nvmexpress.discovery 00:20:23.640 Transport Address: 10.0.0.3 00:20:23.640 Discovery Log Entry 1 00:20:23.640 ---------------------- 00:20:23.640 Transport Type: 3 (TCP) 00:20:23.640 Address Family: 1 (IPv4) 00:20:23.640 Subsystem Type: 2 (NVM Subsystem) 00:20:23.640 Entry Flags: 00:20:23.640 Duplicate Returned Information: 0 00:20:23.640 Explicit Persistent Connection Support for Discovery: 0 00:20:23.640 Transport Requirements: 00:20:23.640 Secure Channel: Not Required 00:20:23.640 Port ID: 0 (0x0000) 00:20:23.640 Controller ID: 65535 (0xffff) 00:20:23.640 Admin Max SQ Size: 128 00:20:23.640 Transport Service Identifier: 4420 00:20:23.640 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:23.640 Transport Address: 10.0.0.3 [2024-12-05 03:04:54.440420] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:20:23.640 [2024-12-05 03:04:54.440452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.640 [2024-12-05 03:04:54.440467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.640 [2024-12-05 03:04:54.440478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:20:23.640 [2024-12-05 03:04:54.440487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.640 [2024-12-05 03:04:54.440499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:20:23.640 [2024-12-05 03:04:54.440509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.640 [2024-12-05 03:04:54.440518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.640 [2024-12-05 03:04:54.440527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.640 [2024-12-05 03:04:54.440543] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.640 [2024-12-05 03:04:54.440552] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.640 [2024-12-05 03:04:54.440560] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.640 [2024-12-05 03:04:54.440579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.640 [2024-12-05 03:04:54.440615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.640 [2024-12-05 03:04:54.440695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.640 [2024-12-05 03:04:54.440709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.640 [2024-12-05 03:04:54.440717] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.640 [2024-12-05 03:04:54.440725] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.640 [2024-12-05 03:04:54.440743] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.640 [2024-12-05 03:04:54.440769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.640 
[2024-12-05 03:04:54.440783] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.640 [2024-12-05 03:04:54.440801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.641 [2024-12-05 03:04:54.440839] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.641 [2024-12-05 03:04:54.440953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.641 [2024-12-05 03:04:54.440966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.641 [2024-12-05 03:04:54.440972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.440979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.641 [2024-12-05 03:04:54.440989] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:20:23.641 [2024-12-05 03:04:54.441009] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:20:23.641 [2024-12-05 03:04:54.441028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.441037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.441045] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.641 [2024-12-05 03:04:54.441064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.641 [2024-12-05 03:04:54.441096] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.641 [2024-12-05 03:04:54.441161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.641 [2024-12-05 03:04:54.441176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.641 [2024-12-05 03:04:54.441183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.441190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.641 [2024-12-05 03:04:54.441209] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.441218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.441224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.641 [2024-12-05 03:04:54.441237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.641 [2024-12-05 03:04:54.441270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.641 [2024-12-05 03:04:54.441390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.641 [2024-12-05 03:04:54.441412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.641 [2024-12-05 03:04:54.441419] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.441426] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.641 [2024-12-05 03:04:54.441445] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.441453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.441459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.641 [2024-12-05 03:04:54.441472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.641 [2024-12-05 03:04:54.441498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.641 [2024-12-05 03:04:54.441564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.641 [2024-12-05 03:04:54.441576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.641 [2024-12-05 03:04:54.441583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.441590] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.641 [2024-12-05 03:04:54.441608] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.441616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.441623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.641 [2024-12-05 03:04:54.441635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.641 [2024-12-05 03:04:54.441660] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.641 [2024-12-05 03:04:54.441717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.641 [2024-12-05 03:04:54.441729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.641 [2024-12-05 03:04:54.441736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.441743] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.641 [2024-12-05 03:04:54.441776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.441787] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.441794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.641 [2024-12-05 03:04:54.441807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.641 [2024-12-05 03:04:54.441834] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.641 [2024-12-05 03:04:54.441895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.641 [2024-12-05 03:04:54.441907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.641 [2024-12-05 03:04:54.441913] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.441921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.641 [2024-12-05 03:04:54.441938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.441947] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.441953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.641 [2024-12-05 03:04:54.441970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.641 [2024-12-05 03:04:54.441995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.641 [2024-12-05 03:04:54.442058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.641 [2024-12-05 03:04:54.442075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.641 [2024-12-05 03:04:54.442083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.442090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.641 [2024-12-05 03:04:54.442113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.442122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.442128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.641 [2024-12-05 03:04:54.442141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.641 [2024-12-05 03:04:54.442166] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.641 [2024-12-05 03:04:54.442237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.641 [2024-12-05 03:04:54.442259] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.641 [2024-12-05 03:04:54.442268] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.442275] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.641 [2024-12-05 03:04:54.442294] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.442302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.442309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.641 [2024-12-05 03:04:54.442321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.641 [2024-12-05 03:04:54.442347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.641 [2024-12-05 03:04:54.442409] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.641 [2024-12-05 03:04:54.442422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.641 [2024-12-05 03:04:54.442428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.442436] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.641 [2024-12-05 03:04:54.442453] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.442462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.442468] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.641 [2024-12-05 03:04:54.442485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.641 [2024-12-05 03:04:54.442511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.641 [2024-12-05 03:04:54.442578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.641 [2024-12-05 03:04:54.442590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.641 [2024-12-05 03:04:54.442597] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.442604] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.641 [2024-12-05 03:04:54.442621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.442630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.442636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.641 [2024-12-05 03:04:54.442649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.641 [2024-12-05 03:04:54.442679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.641 [2024-12-05 03:04:54.442768] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.641 [2024-12-05 03:04:54.442793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.641 [2024-12-05 03:04:54.442801] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.442809] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.641 [2024-12-05 03:04:54.442828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.442837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.641 [2024-12-05 03:04:54.442847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.641 [2024-12-05 03:04:54.442861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.642 [2024-12-05 03:04:54.442891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.642 [2024-12-05 03:04:54.442955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.642 [2024-12-05 03:04:54.442968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.642 [2024-12-05 03:04:54.442974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.642 [2024-12-05 03:04:54.442982] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.642 [2024-12-05 03:04:54.443003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.642 [2024-12-05 03:04:54.443012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.642 [2024-12-05 03:04:54.443019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.642 [2024-12-05 03:04:54.443032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.642 [2024-12-05 03:04:54.443057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.642 [2024-12-05 03:04:54.443121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.642 [2024-12-05 03:04:54.443135] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.642 [2024-12-05 03:04:54.443143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.642 [2024-12-05 03:04:54.443150] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.642 [2024-12-05 03:04:54.443167] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.642 [2024-12-05 03:04:54.443175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.642 [2024-12-05 03:04:54.443182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.642 [2024-12-05 03:04:54.443195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.642 [2024-12-05 03:04:54.443220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.642 [2024-12-05 03:04:54.443283] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.642 [2024-12-05 03:04:54.443296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.642 [2024-12-05 03:04:54.443302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.642 [2024-12-05 03:04:54.443309] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.642 [2024-12-05 03:04:54.443330] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.642 [2024-12-05 03:04:54.443340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.642 [2024-12-05 03:04:54.443346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.642 [2024-12-05 03:04:54.443359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.642 [2024-12-05 03:04:54.443384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.642 [2024-12-05 03:04:54.443443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.642 [2024-12-05 03:04:54.443455] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.642 [2024-12-05 03:04:54.443461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.642 [2024-12-05 03:04:54.443468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.642 [2024-12-05 03:04:54.443485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.642 [2024-12-05 03:04:54.443494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.642 [2024-12-05 03:04:54.443500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.642 [2024-12-05 03:04:54.443513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.642 [2024-12-05 03:04:54.443537] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.642 [2024-12-05 03:04:54.443596] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.642 [2024-12-05 03:04:54.443608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.642 [2024-12-05 03:04:54.443614] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.642 [2024-12-05 03:04:54.443621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.642 [2024-12-05 03:04:54.443639] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.642 [2024-12-05 03:04:54.443647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.642 [2024-12-05 03:04:54.443653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.642 [2024-12-05 03:04:54.443670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.642 [2024-12-05 03:04:54.443696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.642 [2024-12-05 03:04:54.447808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.642 [2024-12-05 03:04:54.447841] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.642 [2024-12-05 03:04:54.447850] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.642 [2024-12-05 03:04:54.447858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.642 [2024-12-05 03:04:54.447881] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.642 [2024-12-05 03:04:54.447896] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.642 [2024-12-05 03:04:54.447904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.642 [2024-12-05 03:04:54.447921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.642 [2024-12-05 03:04:54.447956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.642 [2024-12-05 03:04:54.448028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.642 [2024-12-05 03:04:54.448040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.642 [2024-12-05 03:04:54.448050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.642 [2024-12-05 03:04:54.448058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.642 [2024-12-05 03:04:54.448073] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:20:23.901 00:20:23.901 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:23.901 [2024-12-05 03:04:54.569204] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:20:23.901 [2024-12-05 03:04:54.569570] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79553 ] 00:20:24.164 [2024-12-05 03:04:54.758910] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:20:24.164 [2024-12-05 03:04:54.759048] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:24.164 [2024-12-05 03:04:54.759064] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:24.164 [2024-12-05 03:04:54.759103] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:24.164 [2024-12-05 03:04:54.759119] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:24.164 [2024-12-05 03:04:54.759497] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:20:24.164 [2024-12-05 03:04:54.759583] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:20:24.164 [2024-12-05 03:04:54.763830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:24.164 [2024-12-05 03:04:54.763862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:24.164 [2024-12-05 03:04:54.763873] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:24.164 [2024-12-05 03:04:54.763883] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:24.164 [2024-12-05 03:04:54.763986] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.164 [2024-12-05 03:04:54.764010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.164 [2024-12-05 03:04:54.764019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:24.164 [2024-12-05 03:04:54.764067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:24.164 [2024-12-05 03:04:54.764114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:24.164 [2024-12-05 03:04:54.771817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.164 [2024-12-05 03:04:54.771852] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.164 [2024-12-05 03:04:54.771879] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.164 [2024-12-05 03:04:54.771888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:24.164 [2024-12-05 03:04:54.771912] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:24.164 [2024-12-05 03:04:54.771930] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:20:24.164 [2024-12-05 03:04:54.771942] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:20:24.164 [2024-12-05 03:04:54.771965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.164 [2024-12-05 03:04:54.771978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.164 
[2024-12-05 03:04:54.771987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:24.164 [2024-12-05 03:04:54.772003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.164 [2024-12-05 03:04:54.772042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:24.164 [2024-12-05 03:04:54.772140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.164 [2024-12-05 03:04:54.772157] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.164 [2024-12-05 03:04:54.772168] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.164 [2024-12-05 03:04:54.772177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:24.164 [2024-12-05 03:04:54.772191] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:20:24.164 [2024-12-05 03:04:54.772207] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:20:24.164 [2024-12-05 03:04:54.772221] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.164 [2024-12-05 03:04:54.772230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.164 [2024-12-05 03:04:54.772237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:24.164 [2024-12-05 03:04:54.772258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.164 [2024-12-05 03:04:54.772290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:24.164 [2024-12-05 03:04:54.772358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.164 [2024-12-05 03:04:54.772370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.164 [2024-12-05 03:04:54.772377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.164 [2024-12-05 03:04:54.772384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:24.164 [2024-12-05 03:04:54.772395] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:20:24.164 [2024-12-05 03:04:54.772415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:24.164 [2024-12-05 03:04:54.772429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.164 [2024-12-05 03:04:54.772437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.164 [2024-12-05 03:04:54.772445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:24.164 [2024-12-05 03:04:54.772463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.164 [2024-12-05 03:04:54.772492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:24.164 [2024-12-05 03:04:54.772557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.164 [2024-12-05 03:04:54.772570] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.164 [2024-12-05 03:04:54.772577] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.164 [2024-12-05 03:04:54.772584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:24.164 [2024-12-05 03:04:54.772594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:24.164 [2024-12-05 03:04:54.772616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.164 [2024-12-05 03:04:54.772626] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.164 [2024-12-05 03:04:54.772637] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:24.164 [2024-12-05 03:04:54.772652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.164 [2024-12-05 03:04:54.772692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:24.164 [2024-12-05 03:04:54.772753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.164 [2024-12-05 03:04:54.772766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.164 [2024-12-05 03:04:54.772778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.164 [2024-12-05 03:04:54.772802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:24.164 [2024-12-05 03:04:54.772813] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:24.164 [2024-12-05 03:04:54.772823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:24.164 [2024-12-05 03:04:54.772838] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:24.164 [2024-12-05 03:04:54.772949] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:20:24.164 [2024-12-05 03:04:54.772958] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:24.164 [2024-12-05 03:04:54.772974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.164 [2024-12-05 03:04:54.772982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.164 [2024-12-05 03:04:54.772990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:24.164 [2024-12-05 03:04:54.773010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.164 [2024-12-05 03:04:54.773041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:24.164 [2024-12-05 03:04:54.773108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.164 [2024-12-05 03:04:54.773120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.164 [2024-12-05 03:04:54.773127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.165 
[2024-12-05 03:04:54.773134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:24.165 [2024-12-05 03:04:54.773145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:24.165 [2024-12-05 03:04:54.773163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.773173] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.773180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:24.165 [2024-12-05 03:04:54.773199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.165 [2024-12-05 03:04:54.773228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:24.165 [2024-12-05 03:04:54.773286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.165 [2024-12-05 03:04:54.773298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.165 [2024-12-05 03:04:54.773304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.773311] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:24.165 [2024-12-05 03:04:54.773321] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:24.165 [2024-12-05 03:04:54.773330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:20:24.165 [2024-12-05 03:04:54.773361] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:20:24.165 [2024-12-05 03:04:54.773383] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:24.165 [2024-12-05 03:04:54.773404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.773413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:24.165 [2024-12-05 03:04:54.773428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.165 [2024-12-05 03:04:54.773459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:24.165 [2024-12-05 03:04:54.773595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.165 [2024-12-05 03:04:54.773621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.165 [2024-12-05 03:04:54.773629] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.773638] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:20:24.165 [2024-12-05 03:04:54.773647] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:24.165 [2024-12-05 03:04:54.773661] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:20:24.165 [2024-12-05 03:04:54.773676] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.773687] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.773705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.165 [2024-12-05 03:04:54.773715] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.165 [2024-12-05 03:04:54.773722] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.773729] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:24.165 [2024-12-05 03:04:54.773748] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:20:24.165 [2024-12-05 03:04:54.773774] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:20:24.165 [2024-12-05 03:04:54.773784] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:20:24.165 [2024-12-05 03:04:54.773793] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:20:24.165 [2024-12-05 03:04:54.773802] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:20:24.165 [2024-12-05 03:04:54.773812] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:20:24.165 [2024-12-05 03:04:54.773841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:24.165 [2024-12-05 03:04:54.773863] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.773873] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.773884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:24.165 [2024-12-05 03:04:54.773899] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:24.165 [2024-12-05 03:04:54.773932] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:24.165 [2024-12-05 03:04:54.774001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.165 [2024-12-05 03:04:54.774014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.165 [2024-12-05 03:04:54.774023] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.774031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:24.165 [2024-12-05 03:04:54.774045] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.774054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.774061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:24.165 [2024-12-05 03:04:54.774083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.165 [2024-12-05 03:04:54.774096] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.774103] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.774110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:20:24.165 [2024-12-05 03:04:54.774121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.165 [2024-12-05 03:04:54.774131] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.774138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.774150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:20:24.165 [2024-12-05 03:04:54.774162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.165 [2024-12-05 03:04:54.774172] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.774179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.774186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:24.165 [2024-12-05 03:04:54.774197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.165 [2024-12-05 03:04:54.774206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:24.165 [2024-12-05 03:04:54.774225] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:24.165 [2024-12-05 03:04:54.774239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.774251] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:24.165 [2024-12-05 03:04:54.774264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.165 [2024-12-05 03:04:54.774296] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:24.165 [2024-12-05 03:04:54.774311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:20:24.165 [2024-12-05 03:04:54.774320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:20:24.165 [2024-12-05 03:04:54.774328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:24.165 [2024-12-05 03:04:54.774336] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:24.165 [2024-12-05 03:04:54.774436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.165 [2024-12-05 03:04:54.774463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.165 [2024-12-05 03:04:54.774472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.774480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:24.165 [2024-12-05 03:04:54.774491] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:20:24.165 [2024-12-05 03:04:54.774501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:24.165 [2024-12-05 03:04:54.774516] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:20:24.165 [2024-12-05 03:04:54.774528] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:24.165 [2024-12-05 03:04:54.774540] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.774549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.774556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:24.165 [2024-12-05 03:04:54.774571] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:24.165 [2024-12-05 03:04:54.774608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:24.165 [2024-12-05 03:04:54.774670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.165 [2024-12-05 03:04:54.774684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.165 [2024-12-05 03:04:54.774691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.774698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:24.165 [2024-12-05 03:04:54.774821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:20:24.165 [2024-12-05 03:04:54.774855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:24.165 [2024-12-05 03:04:54.774875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.165 [2024-12-05 03:04:54.774890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:24.165 [2024-12-05 03:04:54.774906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.165 [2024-12-05 03:04:54.774938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:24.166 [2024-12-05 03:04:54.775032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.166 [2024-12-05 03:04:54.775049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.166 [2024-12-05 03:04:54.775056] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.775064] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:24.166 [2024-12-05 03:04:54.775072] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:24.166 [2024-12-05 03:04:54.775086] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.775102] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.775110] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.775123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.166 [2024-12-05 03:04:54.775133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.166 [2024-12-05 03:04:54.775140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.775147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:24.166 [2024-12-05 03:04:54.775187] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:20:24.166 [2024-12-05 03:04:54.775211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:20:24.166 [2024-12-05 03:04:54.775238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:20:24.166 [2024-12-05 03:04:54.775256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.775270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:24.166 [2024-12-05 03:04:54.775287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.166 [2024-12-05 03:04:54.775318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:24.166 [2024-12-05 03:04:54.775457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.166 [2024-12-05 03:04:54.775481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.166 [2024-12-05 03:04:54.775490] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.775497] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:24.166 [2024-12-05 03:04:54.775505] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:24.166 [2024-12-05 03:04:54.775516] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.775530] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.775538] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.775551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.166 [2024-12-05 03:04:54.775561] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.166 [2024-12-05 03:04:54.775567] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.775574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:24.166 [2024-12-05 03:04:54.775613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:24.166 [2024-12-05 03:04:54.775638] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:24.166 [2024-12-05 03:04:54.775660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.775670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:24.166 [2024-12-05 03:04:54.775685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.166 [2024-12-05 03:04:54.775716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:24.166 [2024-12-05 03:04:54.779780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.166 [2024-12-05 03:04:54.779811] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.166 [2024-12-05 03:04:54.779820] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.779827] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:24.166 [2024-12-05 03:04:54.779836] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:24.166 [2024-12-05 03:04:54.779844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.779865] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.779873] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.779894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.166 [2024-12-05 03:04:54.779904] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.166 [2024-12-05 03:04:54.779910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.779917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:24.166 [2024-12-05 03:04:54.779955] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:24.166 [2024-12-05 03:04:54.779974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:20:24.166 [2024-12-05 03:04:54.779992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:20:24.166 [2024-12-05 03:04:54.780007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:24.166 [2024-12-05 03:04:54.780016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:24.166 [2024-12-05 03:04:54.780029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:20:24.166 [2024-12-05 03:04:54.780041] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:20:24.166 [2024-12-05 03:04:54.780050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:20:24.166 [2024-12-05 03:04:54.780060] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:20:24.166 [2024-12-05 03:04:54.780098] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.780109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:24.166 [2024-12-05 03:04:54.780125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.166 [2024-12-05 03:04:54.780138] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.780146] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.780153] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:24.166 [2024-12-05 03:04:54.780170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.166 [2024-12-05 03:04:54.780211] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:24.166 [2024-12-05 03:04:54.780225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:24.166 [2024-12-05 03:04:54.780323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.166 [2024-12-05 03:04:54.780341] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.166 [2024-12-05 03:04:54.780350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.780358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:24.166 [2024-12-05 03:04:54.780371] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.166 [2024-12-05 03:04:54.780381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.166 [2024-12-05 03:04:54.780387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.780394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:24.166 [2024-12-05 03:04:54.780412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.780420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:24.166 [2024-12-05 03:04:54.780433] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.166 [2024-12-05 03:04:54.780465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:24.166 [2024-12-05 03:04:54.780530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.166 [2024-12-05 03:04:54.780542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.166 [2024-12-05 03:04:54.780548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.780555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:24.166 [2024-12-05 03:04:54.780576] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.780585] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:24.166 [2024-12-05 03:04:54.780598] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.166 [2024-12-05 03:04:54.780623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:24.166 [2024-12-05 03:04:54.780690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.166 [2024-12-05 03:04:54.780705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.166 [2024-12-05 03:04:54.780713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.780720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:24.166 [2024-12-05 03:04:54.780737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.780745] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:24.166 [2024-12-05 03:04:54.780778] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.166 [2024-12-05 03:04:54.780810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:24.166 [2024-12-05 03:04:54.780878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.166 [2024-12-05 03:04:54.780891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.166 [2024-12-05 03:04:54.780897] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.166 [2024-12-05 03:04:54.780905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:24.167 [2024-12-05 03:04:54.780937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.780948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:24.167 [2024-12-05 03:04:54.780962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.167 [2024-12-05 03:04:54.780981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.780990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:24.167 [2024-12-05 03:04:54.781006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.167 [2024-12-05 03:04:54.781020] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080) 00:20:24.167 [2024-12-05 03:04:54.781041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.167 [2024-12-05 03:04:54.781061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:20:24.167 [2024-12-05 03:04:54.781070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:20:24.167 [2024-12-05 03:04:54.781083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.167 [2024-12-05 03:04:54.781113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:24.167 [2024-12-05 03:04:54.781125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:24.167 [2024-12-05 03:04:54.781137] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:20:24.167 [2024-12-05 03:04:54.781146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:20:24.167 [2024-12-05 03:04:54.781343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.167 [2024-12-05 03:04:54.781370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.167 [2024-12-05 03:04:54.781379] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781391] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:20:24.167 [2024-12-05 03:04:54.781401] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:20:24.167 [2024-12-05 03:04:54.781409] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781444] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781455] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.167 [2024-12-05 03:04:54.781478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.167 [2024-12-05 03:04:54.781485] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781492] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:20:24.167 [2024-12-05 03:04:54.781500] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:20:24.167 [2024-12-05 03:04:54.781507] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781518] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781524] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781534] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.167 [2024-12-05 03:04:54.781543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.167 [2024-12-05 03:04:54.781552] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781559] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 00:20:24.167 [2024-12-05 03:04:54.781567] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:20:24.167 
[2024-12-05 03:04:54.781574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781587] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781594] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.167 [2024-12-05 03:04:54.781612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.167 [2024-12-05 03:04:54.781619] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781626] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:20:24.167 [2024-12-05 03:04:54.781634] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:24.167 [2024-12-05 03:04:54.781641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781655] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781662] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.167 [2024-12-05 03:04:54.781683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.167 [2024-12-05 03:04:54.781690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:24.167 [2024-12-05 03:04:54.781728] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.167 [2024-12-05 03:04:54.781740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.167 [2024-12-05 03:04:54.781746] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781768] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:24.167 [2024-12-05 03:04:54.781792] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.167 [2024-12-05 03:04:54.781803] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.167 [2024-12-05 03:04:54.781810] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:20:24.167 [2024-12-05 03:04:54.781830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.167 [2024-12-05 03:04:54.781840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.167 [2024-12-05 03:04:54.781846] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.167 [2024-12-05 03:04:54.781853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:20:24.167 ===================================================== 00:20:24.167 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:24.167 ===================================================== 00:20:24.167 Controller Capabilities/Features 00:20:24.167 ================================ 00:20:24.167 Vendor ID: 8086 00:20:24.167 Subsystem Vendor ID: 8086 
00:20:24.167 Serial Number: SPDK00000000000001 00:20:24.167 Model Number: SPDK bdev Controller 00:20:24.167 Firmware Version: 25.01 00:20:24.167 Recommended Arb Burst: 6 00:20:24.167 IEEE OUI Identifier: e4 d2 5c 00:20:24.167 Multi-path I/O 00:20:24.167 May have multiple subsystem ports: Yes 00:20:24.167 May have multiple controllers: Yes 00:20:24.167 Associated with SR-IOV VF: No 00:20:24.167 Max Data Transfer Size: 131072 00:20:24.167 Max Number of Namespaces: 32 00:20:24.167 Max Number of I/O Queues: 127 00:20:24.167 NVMe Specification Version (VS): 1.3 00:20:24.167 NVMe Specification Version (Identify): 1.3 00:20:24.167 Maximum Queue Entries: 128 00:20:24.167 Contiguous Queues Required: Yes 00:20:24.167 Arbitration Mechanisms Supported 00:20:24.167 Weighted Round Robin: Not Supported 00:20:24.167 Vendor Specific: Not Supported 00:20:24.167 Reset Timeout: 15000 ms 00:20:24.167 Doorbell Stride: 4 bytes 00:20:24.167 NVM Subsystem Reset: Not Supported 00:20:24.167 Command Sets Supported 00:20:24.167 NVM Command Set: Supported 00:20:24.167 Boot Partition: Not Supported 00:20:24.167 Memory Page Size Minimum: 4096 bytes 00:20:24.167 Memory Page Size Maximum: 4096 bytes 00:20:24.167 Persistent Memory Region: Not Supported 00:20:24.167 Optional Asynchronous Events Supported 00:20:24.167 Namespace Attribute Notices: Supported 00:20:24.167 Firmware Activation Notices: Not Supported 00:20:24.167 ANA Change Notices: Not Supported 00:20:24.167 PLE Aggregate Log Change Notices: Not Supported 00:20:24.167 LBA Status Info Alert Notices: Not Supported 00:20:24.167 EGE Aggregate Log Change Notices: Not Supported 00:20:24.167 Normal NVM Subsystem Shutdown event: Not Supported 00:20:24.167 Zone Descriptor Change Notices: Not Supported 00:20:24.167 Discovery Log Change Notices: Not Supported 00:20:24.167 Controller Attributes 00:20:24.167 128-bit Host Identifier: Supported 00:20:24.167 Non-Operational Permissive Mode: Not Supported 00:20:24.167 NVM Sets: Not Supported 00:20:24.167 Read Recovery Levels: Not Supported 00:20:24.167 Endurance Groups: Not Supported 00:20:24.167 Predictable Latency Mode: Not Supported 00:20:24.167 Traffic Based Keep ALive: Not Supported 00:20:24.167 Namespace Granularity: Not Supported 00:20:24.167 SQ Associations: Not Supported 00:20:24.167 UUID List: Not Supported 00:20:24.167 Multi-Domain Subsystem: Not Supported 00:20:24.167 Fixed Capacity Management: Not Supported 00:20:24.167 Variable Capacity Management: Not Supported 00:20:24.167 Delete Endurance Group: Not Supported 00:20:24.167 Delete NVM Set: Not Supported 00:20:24.167 Extended LBA Formats Supported: Not Supported 00:20:24.167 Flexible Data Placement Supported: Not Supported 00:20:24.167 00:20:24.167 Controller Memory Buffer Support 00:20:24.167 ================================ 00:20:24.167 Supported: No 00:20:24.167 00:20:24.167 Persistent Memory Region Support 00:20:24.167 ================================ 00:20:24.167 Supported: No 00:20:24.167 00:20:24.167 Admin Command Set Attributes 00:20:24.168 ============================ 00:20:24.168 Security Send/Receive: Not Supported 00:20:24.168 Format NVM: Not Supported 00:20:24.168 Firmware Activate/Download: Not Supported 00:20:24.168 Namespace Management: Not Supported 00:20:24.168 Device Self-Test: Not Supported 00:20:24.168 Directives: Not Supported 00:20:24.168 NVMe-MI: Not Supported 00:20:24.168 Virtualization Management: Not Supported 00:20:24.168 Doorbell Buffer Config: Not Supported 00:20:24.168 Get LBA Status Capability: Not Supported 00:20:24.168 Command & 
Feature Lockdown Capability: Not Supported 00:20:24.168 Abort Command Limit: 4 00:20:24.168 Async Event Request Limit: 4 00:20:24.168 Number of Firmware Slots: N/A 00:20:24.168 Firmware Slot 1 Read-Only: N/A 00:20:24.168 Firmware Activation Without Reset: N/A 00:20:24.168 Multiple Update Detection Support: N/A 00:20:24.168 Firmware Update Granularity: No Information Provided 00:20:24.168 Per-Namespace SMART Log: No 00:20:24.168 Asymmetric Namespace Access Log Page: Not Supported 00:20:24.168 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:24.168 Command Effects Log Page: Supported 00:20:24.168 Get Log Page Extended Data: Supported 00:20:24.168 Telemetry Log Pages: Not Supported 00:20:24.168 Persistent Event Log Pages: Not Supported 00:20:24.168 Supported Log Pages Log Page: May Support 00:20:24.168 Commands Supported & Effects Log Page: Not Supported 00:20:24.168 Feature Identifiers & Effects Log Page:May Support 00:20:24.168 NVMe-MI Commands & Effects Log Page: May Support 00:20:24.168 Data Area 4 for Telemetry Log: Not Supported 00:20:24.168 Error Log Page Entries Supported: 128 00:20:24.168 Keep Alive: Supported 00:20:24.168 Keep Alive Granularity: 10000 ms 00:20:24.168 00:20:24.168 NVM Command Set Attributes 00:20:24.168 ========================== 00:20:24.168 Submission Queue Entry Size 00:20:24.168 Max: 64 00:20:24.168 Min: 64 00:20:24.168 Completion Queue Entry Size 00:20:24.168 Max: 16 00:20:24.168 Min: 16 00:20:24.168 Number of Namespaces: 32 00:20:24.168 Compare Command: Supported 00:20:24.168 Write Uncorrectable Command: Not Supported 00:20:24.168 Dataset Management Command: Supported 00:20:24.168 Write Zeroes Command: Supported 00:20:24.168 Set Features Save Field: Not Supported 00:20:24.168 Reservations: Supported 00:20:24.168 Timestamp: Not Supported 00:20:24.168 Copy: Supported 00:20:24.168 Volatile Write Cache: Present 00:20:24.168 Atomic Write Unit (Normal): 1 00:20:24.168 Atomic Write Unit (PFail): 1 00:20:24.168 Atomic Compare & Write Unit: 1 00:20:24.168 Fused Compare & Write: Supported 00:20:24.168 Scatter-Gather List 00:20:24.168 SGL Command Set: Supported 00:20:24.168 SGL Keyed: Supported 00:20:24.168 SGL Bit Bucket Descriptor: Not Supported 00:20:24.168 SGL Metadata Pointer: Not Supported 00:20:24.168 Oversized SGL: Not Supported 00:20:24.168 SGL Metadata Address: Not Supported 00:20:24.168 SGL Offset: Supported 00:20:24.168 Transport SGL Data Block: Not Supported 00:20:24.168 Replay Protected Memory Block: Not Supported 00:20:24.168 00:20:24.168 Firmware Slot Information 00:20:24.168 ========================= 00:20:24.168 Active slot: 1 00:20:24.168 Slot 1 Firmware Revision: 25.01 00:20:24.168 00:20:24.168 00:20:24.168 Commands Supported and Effects 00:20:24.168 ============================== 00:20:24.168 Admin Commands 00:20:24.168 -------------- 00:20:24.168 Get Log Page (02h): Supported 00:20:24.168 Identify (06h): Supported 00:20:24.168 Abort (08h): Supported 00:20:24.168 Set Features (09h): Supported 00:20:24.168 Get Features (0Ah): Supported 00:20:24.168 Asynchronous Event Request (0Ch): Supported 00:20:24.168 Keep Alive (18h): Supported 00:20:24.168 I/O Commands 00:20:24.168 ------------ 00:20:24.168 Flush (00h): Supported LBA-Change 00:20:24.168 Write (01h): Supported LBA-Change 00:20:24.168 Read (02h): Supported 00:20:24.168 Compare (05h): Supported 00:20:24.168 Write Zeroes (08h): Supported LBA-Change 00:20:24.168 Dataset Management (09h): Supported LBA-Change 00:20:24.168 Copy (19h): Supported LBA-Change 00:20:24.168 00:20:24.168 Error Log 00:20:24.168 
========= 00:20:24.168 00:20:24.168 Arbitration 00:20:24.168 =========== 00:20:24.168 Arbitration Burst: 1 00:20:24.168 00:20:24.168 Power Management 00:20:24.168 ================ 00:20:24.168 Number of Power States: 1 00:20:24.168 Current Power State: Power State #0 00:20:24.168 Power State #0: 00:20:24.168 Max Power: 0.00 W 00:20:24.168 Non-Operational State: Operational 00:20:24.168 Entry Latency: Not Reported 00:20:24.168 Exit Latency: Not Reported 00:20:24.168 Relative Read Throughput: 0 00:20:24.168 Relative Read Latency: 0 00:20:24.168 Relative Write Throughput: 0 00:20:24.168 Relative Write Latency: 0 00:20:24.168 Idle Power: Not Reported 00:20:24.168 Active Power: Not Reported 00:20:24.168 Non-Operational Permissive Mode: Not Supported 00:20:24.168 00:20:24.168 Health Information 00:20:24.168 ================== 00:20:24.168 Critical Warnings: 00:20:24.168 Available Spare Space: OK 00:20:24.168 Temperature: OK 00:20:24.168 Device Reliability: OK 00:20:24.168 Read Only: No 00:20:24.168 Volatile Memory Backup: OK 00:20:24.168 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:24.168 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:24.168 Available Spare: 0% 00:20:24.168 Available Spare Threshold: 0% 00:20:24.168 Life Percentage Used:[2024-12-05 03:04:54.782034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.168 [2024-12-05 03:04:54.782048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:20:24.168 [2024-12-05 03:04:54.782067] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.168 [2024-12-05 03:04:54.782105] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:20:24.168 [2024-12-05 03:04:54.782181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.168 [2024-12-05 03:04:54.782195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.168 [2024-12-05 03:04:54.782208] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.168 [2024-12-05 03:04:54.782217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:20:24.168 [2024-12-05 03:04:54.782294] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:20:24.168 [2024-12-05 03:04:54.782332] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:24.168 [2024-12-05 03:04:54.782357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.168 [2024-12-05 03:04:54.782372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:20:24.168 [2024-12-05 03:04:54.782390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.168 [2024-12-05 03:04:54.782399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:20:24.168 [2024-12-05 03:04:54.782408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.168 [2024-12-05 03:04:54.782416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 
00:20:24.168 [2024-12-05 03:04:54.782426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.168 [2024-12-05 03:04:54.782441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.168 [2024-12-05 03:04:54.782450] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.168 [2024-12-05 03:04:54.782463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:24.168 [2024-12-05 03:04:54.782479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.168 [2024-12-05 03:04:54.782520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:24.169 [2024-12-05 03:04:54.782613] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.169 [2024-12-05 03:04:54.782628] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.169 [2024-12-05 03:04:54.782636] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.782644] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:24.169 [2024-12-05 03:04:54.782659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.782668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.782676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:24.169 [2024-12-05 03:04:54.782690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.169 [2024-12-05 03:04:54.782745] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:24.169 [2024-12-05 03:04:54.782863] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.169 [2024-12-05 03:04:54.782882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.169 [2024-12-05 03:04:54.782889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.782897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:24.169 [2024-12-05 03:04:54.782906] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:20:24.169 [2024-12-05 03:04:54.782916] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:20:24.169 [2024-12-05 03:04:54.782935] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.782944] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.782952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:24.169 [2024-12-05 03:04:54.782970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.169 [2024-12-05 03:04:54.783002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:24.169 [2024-12-05 03:04:54.783066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.169 [2024-12-05 
03:04:54.783088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.169 [2024-12-05 03:04:54.783096] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.783104] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:24.169 [2024-12-05 03:04:54.783127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.783137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.783144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:24.169 [2024-12-05 03:04:54.783158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.169 [2024-12-05 03:04:54.783184] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:24.169 [2024-12-05 03:04:54.783248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.169 [2024-12-05 03:04:54.783265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.169 [2024-12-05 03:04:54.783278] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.783287] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:24.169 [2024-12-05 03:04:54.783305] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.783314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.783321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:24.169 [2024-12-05 03:04:54.783334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.169 [2024-12-05 03:04:54.783361] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:24.169 [2024-12-05 03:04:54.783421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.169 [2024-12-05 03:04:54.783433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.169 [2024-12-05 03:04:54.783439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.783446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:24.169 [2024-12-05 03:04:54.783464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.783473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.783480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:24.169 [2024-12-05 03:04:54.783496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.169 [2024-12-05 03:04:54.783522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:24.169 [2024-12-05 03:04:54.783589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.169 [2024-12-05 03:04:54.783600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.169 [2024-12-05 03:04:54.783607] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.783614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:24.169 [2024-12-05 03:04:54.783632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.783640] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.783647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:24.169 [2024-12-05 03:04:54.783660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.169 [2024-12-05 03:04:54.783684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:24.169 [2024-12-05 03:04:54.783747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.169 [2024-12-05 03:04:54.787794] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.169 [2024-12-05 03:04:54.787807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.787816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:24.169 [2024-12-05 03:04:54.787849] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.787859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.787867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:24.169 [2024-12-05 03:04:54.787885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.169 [2024-12-05 03:04:54.787921] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:24.169 [2024-12-05 03:04:54.787991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.169 [2024-12-05 03:04:54.788004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.169 [2024-12-05 03:04:54.788010] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.169 [2024-12-05 03:04:54.788017] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:24.169 [2024-12-05 03:04:54.788032] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:20:24.169 0% 00:20:24.169 Data Units Read: 0 00:20:24.169 Data Units Written: 0 00:20:24.169 Host Read Commands: 0 00:20:24.169 Host Write Commands: 0 00:20:24.169 Controller Busy Time: 0 minutes 00:20:24.169 Power Cycles: 0 00:20:24.169 Power On Hours: 0 hours 00:20:24.169 Unsafe Shutdowns: 0 00:20:24.169 Unrecoverable Media Errors: 0 00:20:24.169 Lifetime Error Log Entries: 0 00:20:24.169 Warning Temperature Time: 0 minutes 00:20:24.169 Critical Temperature Time: 0 minutes 00:20:24.169 00:20:24.169 Number of Queues 00:20:24.169 ================ 00:20:24.169 Number of I/O Submission Queues: 127 00:20:24.169 Number of I/O Completion Queues: 127 00:20:24.169 00:20:24.169 Active Namespaces 00:20:24.169 ================= 00:20:24.169 Namespace ID:1 00:20:24.169 Error Recovery Timeout: Unlimited 00:20:24.169 Command Set Identifier: NVM (00h) 00:20:24.169 Deallocate: Supported 00:20:24.169 
Deallocated/Unwritten Error: Not Supported 00:20:24.169 Deallocated Read Value: Unknown 00:20:24.169 Deallocate in Write Zeroes: Not Supported 00:20:24.169 Deallocated Guard Field: 0xFFFF 00:20:24.169 Flush: Supported 00:20:24.169 Reservation: Supported 00:20:24.169 Namespace Sharing Capabilities: Multiple Controllers 00:20:24.169 Size (in LBAs): 131072 (0GiB) 00:20:24.169 Capacity (in LBAs): 131072 (0GiB) 00:20:24.169 Utilization (in LBAs): 131072 (0GiB) 00:20:24.169 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:24.169 EUI64: ABCDEF0123456789 00:20:24.169 UUID: e87b7d1b-f2aa-46ee-a53b-cd440f6e5327 00:20:24.169 Thin Provisioning: Not Supported 00:20:24.169 Per-NS Atomic Units: Yes 00:20:24.169 Atomic Boundary Size (Normal): 0 00:20:24.169 Atomic Boundary Size (PFail): 0 00:20:24.169 Atomic Boundary Offset: 0 00:20:24.169 Maximum Single Source Range Length: 65535 00:20:24.169 Maximum Copy Length: 65535 00:20:24.169 Maximum Source Range Count: 1 00:20:24.169 NGUID/EUI64 Never Reused: No 00:20:24.169 Namespace Write Protected: No 00:20:24.169 Number of LBA Formats: 1 00:20:24.169 Current LBA Format: LBA Format #00 00:20:24.169 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:24.169 00:20:24.169 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:24.169 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:24.169 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.169 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:24.169 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.169 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:24.169 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:24.169 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:24.170 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:20:24.170 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:24.170 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:20:24.170 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:24.170 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:24.170 rmmod nvme_tcp 00:20:24.170 rmmod nvme_fabrics 00:20:24.170 rmmod nvme_keyring 00:20:24.170 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:24.170 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:20:24.170 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:20:24.170 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 79515 ']' 00:20:24.170 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 79515 00:20:24.170 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 79515 ']' 00:20:24.170 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 79515 00:20:24.170 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:20:24.170 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.170 
03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79515 00:20:24.170 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:24.170 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:24.170 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79515' 00:20:24.170 killing process with pid 79515 00:20:24.170 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 79515 00:20:24.170 03:04:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 79515 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.543 03:04:56 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:20:25.543 00:20:25.543 real 0m4.099s 00:20:25.543 user 0m11.127s 00:20:25.543 sys 0m0.942s 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:25.543 ************************************ 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:25.543 END TEST nvmf_identify 00:20:25.543 ************************************ 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:25.543 03:04:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.802 ************************************ 00:20:25.802 START TEST nvmf_perf 00:20:25.802 ************************************ 00:20:25.802 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:25.802 * Looking for test storage... 00:20:25.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:25.802 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:25.802 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:25.802 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:25.802 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:25.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.803 --rc genhtml_branch_coverage=1 00:20:25.803 --rc genhtml_function_coverage=1 00:20:25.803 --rc genhtml_legend=1 00:20:25.803 --rc geninfo_all_blocks=1 00:20:25.803 --rc geninfo_unexecuted_blocks=1 00:20:25.803 00:20:25.803 ' 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:25.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.803 --rc genhtml_branch_coverage=1 00:20:25.803 --rc genhtml_function_coverage=1 00:20:25.803 --rc genhtml_legend=1 00:20:25.803 --rc geninfo_all_blocks=1 00:20:25.803 --rc geninfo_unexecuted_blocks=1 00:20:25.803 00:20:25.803 ' 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:25.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.803 --rc genhtml_branch_coverage=1 00:20:25.803 --rc genhtml_function_coverage=1 00:20:25.803 --rc genhtml_legend=1 00:20:25.803 --rc geninfo_all_blocks=1 00:20:25.803 --rc geninfo_unexecuted_blocks=1 00:20:25.803 00:20:25.803 ' 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:25.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.803 --rc genhtml_branch_coverage=1 00:20:25.803 --rc genhtml_function_coverage=1 00:20:25.803 --rc genhtml_legend=1 00:20:25.803 --rc geninfo_all_blocks=1 00:20:25.803 --rc geninfo_unexecuted_blocks=1 00:20:25.803 00:20:25.803 ' 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:25.803 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:25.803 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:25.804 Cannot find device "nvmf_init_br" 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:25.804 Cannot find device "nvmf_init_br2" 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:20:25.804 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:26.062 Cannot find device "nvmf_tgt_br" 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:26.062 Cannot find device "nvmf_tgt_br2" 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:26.062 Cannot find device "nvmf_init_br" 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:26.062 Cannot find device "nvmf_init_br2" 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:26.062 Cannot find device "nvmf_tgt_br" 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:26.062 Cannot find device "nvmf_tgt_br2" 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:26.062 Cannot find device "nvmf_br" 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:26.062 Cannot find device "nvmf_init_if" 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:26.062 Cannot find device "nvmf_init_if2" 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:26.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:26.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:26.062 03:04:56 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:26.062 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:26.321 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:26.321 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:26.321 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:26.322 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:26.322 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:26.322 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:26.322 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:26.322 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:26.322 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:26.322 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:26.322 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:26.322 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:26.322 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:26.322 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:26.322 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:26.322 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:20:26.322 00:20:26.322 --- 10.0.0.3 ping statistics --- 00:20:26.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.322 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:26.322 03:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:26.322 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:26.322 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:20:26.322 00:20:26.322 --- 10.0.0.4 ping statistics --- 00:20:26.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.322 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:26.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:20:26.322 00:20:26.322 --- 10.0.0.1 ping statistics --- 00:20:26.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.322 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:26.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:20:26.322 00:20:26.322 --- 10.0.0.2 ping statistics --- 00:20:26.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.322 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=79785 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 79785 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 79785 ']' 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.322 03:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:26.581 [2024-12-05 03:04:57.165598] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:20:26.581 [2024-12-05 03:04:57.166010] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.581 [2024-12-05 03:04:57.350203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.839 [2024-12-05 03:04:57.442950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.839 [2024-12-05 03:04:57.443302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.839 [2024-12-05 03:04:57.443453] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.839 [2024-12-05 03:04:57.443594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.839 [2024-12-05 03:04:57.443641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.839 [2024-12-05 03:04:57.445430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.839 [2024-12-05 03:04:57.445543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.839 [2024-12-05 03:04:57.445672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.839 [2024-12-05 03:04:57.445692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.839 [2024-12-05 03:04:57.610781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:27.403 03:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.403 03:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:20:27.403 03:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:27.403 03:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:27.403 03:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:27.403 03:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.403 03:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:27.403 03:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:27.971 03:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:27.971 03:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:28.229 03:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:20:28.229 03:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:28.488 03:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:28.488 03:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:20:28.488 03:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:28.488 03:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:28.488 03:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:28.746 [2024-12-05 03:04:59.496285] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.746 03:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:29.005 03:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:29.005 03:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:29.264 03:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:29.264 03:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:29.522 03:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:29.781 [2024-12-05 03:05:00.547340] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:29.781 03:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:20:30.064 03:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:30.064 03:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:30.064 03:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:30.064 03:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:31.440 Initializing NVMe Controllers 00:20:31.440 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:31.440 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:31.440 Initialization complete. Launching workers. 00:20:31.440 ======================================================== 00:20:31.440 Latency(us) 00:20:31.440 Device Information : IOPS MiB/s Average min max 00:20:31.440 PCIE (0000:00:10.0) NSID 1 from core 0: 24494.85 95.68 1305.64 359.81 5185.68 00:20:31.440 ======================================================== 00:20:31.440 Total : 24494.85 95.68 1305.64 359.81 5185.68 00:20:31.440 00:20:31.440 03:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:32.820 Initializing NVMe Controllers 00:20:32.820 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:32.820 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:32.820 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:32.820 Initialization complete. Launching workers. 
00:20:32.820 ======================================================== 00:20:32.820 Latency(us) 00:20:32.821 Device Information : IOPS MiB/s Average min max 00:20:32.821 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2921.93 11.41 341.84 132.08 5117.04 00:20:32.821 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8127.82 6926.17 12005.96 00:20:32.821 ======================================================== 00:20:32.821 Total : 3045.93 11.90 658.80 132.08 12005.96 00:20:32.821 00:20:32.821 03:05:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:34.200 Initializing NVMe Controllers 00:20:34.200 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.200 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:34.200 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:34.200 Initialization complete. Launching workers. 00:20:34.200 ======================================================== 00:20:34.200 Latency(us) 00:20:34.200 Device Information : IOPS MiB/s Average min max 00:20:34.200 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7742.24 30.24 4134.07 620.28 10715.97 00:20:34.200 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3911.48 15.28 8217.78 5974.76 16169.73 00:20:34.200 ======================================================== 00:20:34.200 Total : 11653.71 45.52 5504.74 620.28 16169.73 00:20:34.200 00:20:34.200 03:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:34.200 03:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:37.485 Initializing NVMe Controllers 00:20:37.485 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:37.485 Controller IO queue size 128, less than required. 00:20:37.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:37.485 Controller IO queue size 128, less than required. 00:20:37.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:37.485 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:37.485 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:37.486 Initialization complete. Launching workers. 
00:20:37.486 ======================================================== 00:20:37.486 Latency(us) 00:20:37.486 Device Information : IOPS MiB/s Average min max 00:20:37.486 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1459.48 364.87 90644.32 42144.20 241098.51 00:20:37.486 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 574.99 143.75 235992.77 80267.33 443676.22 00:20:37.486 ======================================================== 00:20:37.486 Total : 2034.47 508.62 131723.39 42144.20 443676.22 00:20:37.486 00:20:37.486 03:05:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:20:37.486 Initializing NVMe Controllers 00:20:37.486 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:37.486 Controller IO queue size 128, less than required. 00:20:37.486 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:37.486 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:37.486 Controller IO queue size 128, less than required. 00:20:37.486 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:37.486 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:37.486 WARNING: Some requested NVMe devices were skipped 00:20:37.486 No valid NVMe controllers or AIO or URING devices found 00:20:37.486 03:05:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:20:40.778 Initializing NVMe Controllers 00:20:40.778 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.778 Controller IO queue size 128, less than required. 00:20:40.778 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.778 Controller IO queue size 128, less than required. 00:20:40.778 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.778 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:40.778 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:40.778 Initialization complete. Launching workers. 
00:20:40.778 00:20:40.778 ==================== 00:20:40.778 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:40.778 TCP transport: 00:20:40.778 polls: 6532 00:20:40.778 idle_polls: 3032 00:20:40.778 sock_completions: 3500 00:20:40.778 nvme_completions: 5517 00:20:40.778 submitted_requests: 8200 00:20:40.778 queued_requests: 1 00:20:40.778 00:20:40.778 ==================== 00:20:40.778 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:40.778 TCP transport: 00:20:40.778 polls: 7515 00:20:40.778 idle_polls: 3820 00:20:40.778 sock_completions: 3695 00:20:40.778 nvme_completions: 5941 00:20:40.778 submitted_requests: 8992 00:20:40.778 queued_requests: 1 00:20:40.778 ======================================================== 00:20:40.778 Latency(us) 00:20:40.778 Device Information : IOPS MiB/s Average min max 00:20:40.778 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1378.85 344.71 94835.44 49956.28 207298.41 00:20:40.778 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1484.84 371.21 90576.44 44972.49 348275.44 00:20:40.778 ======================================================== 00:20:40.778 Total : 2863.70 715.92 92627.13 44972.49 348275.44 00:20:40.778 00:20:40.778 03:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:40.778 03:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:40.778 03:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:40.778 03:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:20:40.778 03:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:41.037 03:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=886191d1-e0e5-4454-8ee1-f80c676a2146 00:20:41.037 03:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 886191d1-e0e5-4454-8ee1-f80c676a2146 00:20:41.037 03:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=886191d1-e0e5-4454-8ee1-f80c676a2146 00:20:41.037 03:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:20:41.037 03:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:20:41.037 03:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:20:41.037 03:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:41.296 03:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:20:41.296 { 00:20:41.296 "uuid": "886191d1-e0e5-4454-8ee1-f80c676a2146", 00:20:41.296 "name": "lvs_0", 00:20:41.296 "base_bdev": "Nvme0n1", 00:20:41.296 "total_data_clusters": 1278, 00:20:41.296 "free_clusters": 1278, 00:20:41.296 "block_size": 4096, 00:20:41.296 "cluster_size": 4194304 00:20:41.296 } 00:20:41.296 ]' 00:20:41.296 03:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="886191d1-e0e5-4454-8ee1-f80c676a2146") .free_clusters' 00:20:41.296 03:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1278 00:20:41.296 03:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="886191d1-e0e5-4454-8ee1-f80c676a2146") .cluster_size' 00:20:41.296 5112 00:20:41.296 03:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:20:41.296 03:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5112 00:20:41.296 03:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5112 00:20:41.296 03:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:41.296 03:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 886191d1-e0e5-4454-8ee1-f80c676a2146 lbd_0 5112 00:20:41.555 03:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=1d4fb46b-89d0-4c01-b943-0c5261c2b608 00:20:41.556 03:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 1d4fb46b-89d0-4c01-b943-0c5261c2b608 lvs_n_0 00:20:42.123 03:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=46d1fdc4-fd2d-4c45-8185-26e2492c0215 00:20:42.123 03:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 46d1fdc4-fd2d-4c45-8185-26e2492c0215 00:20:42.123 03:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=46d1fdc4-fd2d-4c45-8185-26e2492c0215 00:20:42.123 03:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:20:42.123 03:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:20:42.123 03:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:20:42.123 03:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:42.123 03:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:20:42.123 { 00:20:42.123 "uuid": "886191d1-e0e5-4454-8ee1-f80c676a2146", 00:20:42.123 "name": "lvs_0", 00:20:42.123 "base_bdev": "Nvme0n1", 00:20:42.123 "total_data_clusters": 1278, 00:20:42.123 "free_clusters": 0, 00:20:42.123 "block_size": 4096, 00:20:42.123 "cluster_size": 4194304 00:20:42.123 }, 00:20:42.123 { 00:20:42.123 "uuid": "46d1fdc4-fd2d-4c45-8185-26e2492c0215", 00:20:42.123 "name": "lvs_n_0", 00:20:42.123 "base_bdev": "1d4fb46b-89d0-4c01-b943-0c5261c2b608", 00:20:42.123 "total_data_clusters": 1276, 00:20:42.123 "free_clusters": 1276, 00:20:42.123 "block_size": 4096, 00:20:42.123 "cluster_size": 4194304 00:20:42.123 } 00:20:42.123 ]' 00:20:42.123 03:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="46d1fdc4-fd2d-4c45-8185-26e2492c0215") .free_clusters' 00:20:42.382 03:05:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1276 00:20:42.382 03:05:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="46d1fdc4-fd2d-4c45-8185-26e2492c0215") .cluster_size' 00:20:42.382 5104 00:20:42.382 03:05:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:20:42.382 03:05:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5104 00:20:42.382 03:05:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5104 00:20:42.382 03:05:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:42.382 03:05:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 46d1fdc4-fd2d-4c45-8185-26e2492c0215 lbd_nest_0 5104 00:20:42.642 03:05:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=b376bf08-9c3f-46f1-92b9-d6097be1fadc 00:20:42.642 03:05:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:42.902 03:05:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:42.902 03:05:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 b376bf08-9c3f-46f1-92b9-d6097be1fadc 00:20:43.162 03:05:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:43.422 03:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:43.422 03:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:43.422 03:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:43.422 03:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:43.422 03:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:43.681 Initializing NVMe Controllers 00:20:43.681 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:43.681 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:43.681 WARNING: Some requested NVMe devices were skipped 00:20:43.681 No valid NVMe controllers or AIO or URING devices found 00:20:43.681 03:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:43.681 03:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:55.985 Initializing NVMe Controllers 00:20:55.985 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.985 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:55.985 Initialization complete. Launching workers. 
00:20:55.985 ======================================================== 00:20:55.985 Latency(us) 00:20:55.985 Device Information : IOPS MiB/s Average min max 00:20:55.985 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 826.82 103.35 1207.29 400.75 8577.04 00:20:55.985 ======================================================== 00:20:55.985 Total : 826.82 103.35 1207.29 400.75 8577.04 00:20:55.985 00:20:55.985 03:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:55.985 03:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:55.985 03:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:55.985 Initializing NVMe Controllers 00:20:55.985 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.985 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:55.985 WARNING: Some requested NVMe devices were skipped 00:20:55.985 No valid NVMe controllers or AIO or URING devices found 00:20:55.985 03:05:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:55.985 03:05:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:05.967 Initializing NVMe Controllers 00:21:05.967 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:05.967 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:05.967 Initialization complete. Launching workers. 
00:21:05.967 ======================================================== 00:21:05.967 Latency(us) 00:21:05.967 Device Information : IOPS MiB/s Average min max 00:21:05.967 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1317.16 164.65 24320.86 5475.98 71719.67 00:21:05.967 ======================================================== 00:21:05.967 Total : 1317.16 164.65 24320.86 5475.98 71719.67 00:21:05.967 00:21:05.967 03:05:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:05.967 03:05:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:05.967 03:05:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:05.967 Initializing NVMe Controllers 00:21:05.967 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:05.967 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:05.967 WARNING: Some requested NVMe devices were skipped 00:21:05.967 No valid NVMe controllers or AIO or URING devices found 00:21:05.967 03:05:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:05.967 03:05:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:15.950 Initializing NVMe Controllers 00:21:15.950 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:15.950 Controller IO queue size 128, less than required. 00:21:15.950 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:15.950 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:15.950 Initialization complete. Launching workers. 
00:21:15.950 ======================================================== 00:21:15.950 Latency(us) 00:21:15.950 Device Information : IOPS MiB/s Average min max 00:21:15.950 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3656.05 457.01 35056.49 7371.53 86823.56 00:21:15.950 ======================================================== 00:21:15.950 Total : 3656.05 457.01 35056.49 7371.53 86823.56 00:21:15.950 00:21:15.950 03:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:16.208 03:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b376bf08-9c3f-46f1-92b9-d6097be1fadc 00:21:16.466 03:05:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:16.724 03:05:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1d4fb46b-89d0-4c01-b943-0c5261c2b608 00:21:16.982 03:05:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:17.240 03:05:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:17.240 03:05:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:17.240 03:05:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:17.240 03:05:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:21:17.240 03:05:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:17.240 03:05:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:17.240 03:05:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:17.240 03:05:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:17.240 rmmod nvme_tcp 00:21:17.240 rmmod nvme_fabrics 00:21:17.240 rmmod nvme_keyring 00:21:17.240 03:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:17.240 03:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:17.240 03:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:17.240 03:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 79785 ']' 00:21:17.240 03:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 79785 00:21:17.240 03:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 79785 ']' 00:21:17.240 03:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 79785 00:21:17.240 03:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:21:17.240 03:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.240 03:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79785 00:21:17.500 killing process with pid 79785 00:21:17.500 03:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:17.500 03:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:17.500 03:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79785' 00:21:17.500 03:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 79785 00:21:17.500 03:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 79785 00:21:18.911 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:18.911 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:18.911 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:18.911 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:21:18.911 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:21:18.911 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:18.911 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:21:18.911 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:18.911 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:18.911 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:18.911 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:18.911 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:18.911 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:18.911 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:18.911 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:18.911 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:18.912 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:18.912 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:18.912 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:18.912 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:18.912 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:18.912 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:18.912 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:18.912 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.912 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:18.912 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.912 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:21:18.912 00:21:18.912 real 0m53.313s 00:21:18.912 user 3m20.787s 00:21:18.912 sys 0m12.220s 00:21:18.912 ************************************ 00:21:18.912 END TEST nvmf_perf 00:21:18.912 ************************************ 00:21:18.912 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:18.912 03:05:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:18.912 03:05:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:18.912 03:05:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:18.912 03:05:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:18.912 03:05:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.173 ************************************ 00:21:19.173 START TEST nvmf_fio_host 00:21:19.173 ************************************ 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:19.173 * Looking for test storage... 00:21:19.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:19.173 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:19.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.173 --rc genhtml_branch_coverage=1 00:21:19.173 --rc genhtml_function_coverage=1 00:21:19.173 --rc genhtml_legend=1 00:21:19.173 --rc geninfo_all_blocks=1 00:21:19.174 --rc geninfo_unexecuted_blocks=1 00:21:19.174 00:21:19.174 ' 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:19.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.174 --rc genhtml_branch_coverage=1 00:21:19.174 --rc genhtml_function_coverage=1 00:21:19.174 --rc genhtml_legend=1 00:21:19.174 --rc geninfo_all_blocks=1 00:21:19.174 --rc geninfo_unexecuted_blocks=1 00:21:19.174 00:21:19.174 ' 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:19.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.174 --rc genhtml_branch_coverage=1 00:21:19.174 --rc genhtml_function_coverage=1 00:21:19.174 --rc genhtml_legend=1 00:21:19.174 --rc geninfo_all_blocks=1 00:21:19.174 --rc geninfo_unexecuted_blocks=1 00:21:19.174 00:21:19.174 ' 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:19.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.174 --rc genhtml_branch_coverage=1 00:21:19.174 --rc genhtml_function_coverage=1 00:21:19.174 --rc genhtml_legend=1 00:21:19.174 --rc geninfo_all_blocks=1 00:21:19.174 --rc geninfo_unexecuted_blocks=1 00:21:19.174 00:21:19.174 ' 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.174 03:05:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.174 03:05:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.174 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:19.175 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:19.175 03:05:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:19.175 Cannot find device "nvmf_init_br" 00:21:19.175 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:21:19.175 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:19.434 Cannot find device "nvmf_init_br2" 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:19.434 Cannot find device "nvmf_tgt_br" 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:21:19.434 Cannot find device "nvmf_tgt_br2" 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:19.434 Cannot find device "nvmf_init_br" 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:19.434 Cannot find device "nvmf_init_br2" 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:19.434 Cannot find device "nvmf_tgt_br" 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:19.434 Cannot find device "nvmf_tgt_br2" 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:19.434 Cannot find device "nvmf_br" 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:19.434 Cannot find device "nvmf_init_if" 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:19.434 Cannot find device "nvmf_init_if2" 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:19.434 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:21:19.434 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:19.434 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:19.435 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:21:19.435 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:19.435 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:19.435 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:19.435 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:19.435 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:19.435 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:19.435 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:19.435 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:21:19.435 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:19.435 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:19.435 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:19.435 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:19.435 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:19.435 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:19.435 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:19.435 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:19.695 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:19.695 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:21:19.695 00:21:19.695 --- 10.0.0.3 ping statistics --- 00:21:19.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.695 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:19.695 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:19.695 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 00:21:19.695 00:21:19.695 --- 10.0.0.4 ping statistics --- 00:21:19.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.695 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:19.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:19.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:21:19.695 00:21:19.695 --- 10.0.0.1 ping statistics --- 00:21:19.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.695 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:19.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:21:19.695 00:21:19.695 --- 10.0.0.2 ping statistics --- 00:21:19.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.695 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:19.695 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:19.696 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.696 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=80682 00:21:19.696 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:19.696 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 80682 00:21:19.696 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:19.696 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 80682 ']' 00:21:19.696 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.696 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.696 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.696 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.696 03:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.696 [2024-12-05 03:05:50.525996] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:21:19.696 [2024-12-05 03:05:50.526154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.955 [2024-12-05 03:05:50.707391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:20.214 [2024-12-05 03:05:50.837988] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.214 [2024-12-05 03:05:50.838057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.214 [2024-12-05 03:05:50.838085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.214 [2024-12-05 03:05:50.838103] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.214 [2024-12-05 03:05:50.838121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:20.214 [2024-12-05 03:05:50.840397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.214 [2024-12-05 03:05:50.840534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.214 [2024-12-05 03:05:50.840711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.214 [2024-12-05 03:05:50.840597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:20.214 [2024-12-05 03:05:51.008341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:20.780 03:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.780 03:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:21:20.780 03:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:21.039 [2024-12-05 03:05:51.763340] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.039 03:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:21.039 03:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:21.039 03:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.039 03:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:21.605 Malloc1 00:21:21.605 03:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:21.864 03:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:22.122 03:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:22.381 [2024-12-05 03:05:52.990434] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:22.381 03:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:21:22.641 03:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:22.641 03:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:22.641 03:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:22.641 03:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:22.641 03:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:22.641 03:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:22.641 03:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:22.641 03:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:22.641 03:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:22.641 03:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:22.641 03:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:22.641 03:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:22.641 03:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:22.641 03:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:22.641 03:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:22.641 03:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:21:22.641 03:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:22.641 03:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:22.641 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:22.641 fio-3.35 00:21:22.641 Starting 1 thread 00:21:25.171 00:21:25.171 test: (groupid=0, jobs=1): err= 0: pid=80752: Thu Dec 5 03:05:55 2024 00:21:25.171 read: IOPS=7709, BW=30.1MiB/s (31.6MB/s)(60.5MiB/2008msec) 00:21:25.171 slat (usec): min=2, max=287, avg= 3.51, stdev= 3.53 00:21:25.171 clat (usec): min=2396, max=15729, avg=8598.95, stdev=671.47 00:21:25.171 lat (usec): min=2453, max=15733, avg=8602.46, stdev=671.28 00:21:25.171 clat percentiles (usec): 00:21:25.171 | 1.00th=[ 7373], 5.00th=[ 7701], 10.00th=[ 7898], 20.00th=[ 8094], 00:21:25.171 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8717], 00:21:25.171 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9634], 00:21:25.171 | 99.00th=[10421], 99.50th=[10814], 99.90th=[13960], 99.95th=[14746], 00:21:25.171 | 99.99th=[15664] 00:21:25.171 bw ( KiB/s): min=29080, max=31928, per=100.00%, avg=30838.00, stdev=1256.05, samples=4 00:21:25.171 iops : min= 7270, max= 7982, avg=7709.50, stdev=314.01, samples=4 00:21:25.171 write: IOPS=7702, BW=30.1MiB/s (31.5MB/s)(60.4MiB/2008msec); 0 zone resets 00:21:25.171 slat (usec): min=2, max=191, avg= 3.63, stdev= 2.59 00:21:25.171 clat (usec): min=2148, max=15051, avg=7899.44, stdev=622.08 00:21:25.171 lat (usec): min=2161, max=15055, avg=7903.07, stdev=622.02 00:21:25.171 clat percentiles (usec): 00:21:25.171 | 1.00th=[ 6718], 5.00th=[ 7111], 10.00th=[ 7242], 20.00th=[ 7439], 00:21:25.171 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:21:25.171 | 70.00th=[ 8160], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8848], 00:21:25.171 | 99.00th=[ 9503], 99.50th=[ 9896], 99.90th=[12649], 99.95th=[13829], 00:21:25.171 | 99.99th=[15008] 00:21:25.171 bw ( KiB/s): min=30152, max=31496, per=99.98%, avg=30802.00, stdev=669.12, samples=4 00:21:25.171 iops : min= 7538, max= 7874, avg=7700.50, stdev=167.28, samples=4 
00:21:25.171 lat (msec) : 4=0.11%, 10=98.61%, 20=1.28% 00:21:25.171 cpu : usr=68.46%, sys=22.92%, ctx=13, majf=0, minf=1553 00:21:25.171 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:25.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:25.171 issued rwts: total=15481,15466,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:25.171 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:25.171 00:21:25.171 Run status group 0 (all jobs): 00:21:25.171 READ: bw=30.1MiB/s (31.6MB/s), 30.1MiB/s-30.1MiB/s (31.6MB/s-31.6MB/s), io=60.5MiB (63.4MB), run=2008-2008msec 00:21:25.171 WRITE: bw=30.1MiB/s (31.5MB/s), 30.1MiB/s-30.1MiB/s (31.5MB/s-31.5MB/s), io=60.4MiB (63.3MB), run=2008-2008msec 00:21:25.430 ----------------------------------------------------- 00:21:25.430 Suppressions used: 00:21:25.430 count bytes template 00:21:25.430 1 57 /usr/src/fio/parse.c 00:21:25.430 1 8 libtcmalloc_minimal.so 00:21:25.430 ----------------------------------------------------- 00:21:25.430 00:21:25.430 03:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:21:25.430 03:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:21:25.430 03:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:25.430 03:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:25.430 03:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:25.430 03:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:25.430 03:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:25.430 03:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:25.430 03:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:25.430 03:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:25.430 03:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:25.430 03:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:25.430 03:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:25.430 03:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:25.430 03:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:21:25.430 03:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:25.430 03:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:21:25.430 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:25.430 fio-3.35 00:21:25.430 Starting 1 thread 00:21:27.967 00:21:27.967 test: (groupid=0, jobs=1): err= 0: pid=80799: Thu Dec 5 03:05:58 2024 00:21:27.967 read: IOPS=7133, BW=111MiB/s (117MB/s)(224MiB/2007msec) 00:21:27.967 slat (usec): min=3, max=150, avg= 4.52, stdev= 2.87 00:21:27.967 clat (usec): min=3246, max=20558, avg=10245.67, stdev=3208.71 00:21:27.967 lat (usec): min=3250, max=20562, avg=10250.19, stdev=3208.81 00:21:27.967 clat percentiles (usec): 00:21:27.967 | 1.00th=[ 4752], 5.00th=[ 5669], 10.00th=[ 6259], 20.00th=[ 7373], 00:21:27.967 | 30.00th=[ 8291], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[10814], 00:21:27.967 | 70.00th=[11600], 80.00th=[12911], 90.00th=[14746], 95.00th=[16319], 00:21:27.967 | 99.00th=[18482], 99.50th=[19006], 99.90th=[19530], 99.95th=[19530], 00:21:27.967 | 99.99th=[19792] 00:21:27.967 bw ( KiB/s): min=51808, max=63328, per=49.57%, avg=56576.00, stdev=5241.95, samples=4 00:21:27.967 iops : min= 3238, max= 3958, avg=3536.00, stdev=327.62, samples=4 00:21:27.967 write: IOPS=4050, BW=63.3MiB/s (66.4MB/s)(115MiB/1824msec); 0 zone resets 00:21:27.967 slat (usec): min=32, max=250, avg=38.77, stdev= 9.63 00:21:27.967 clat (usec): min=7620, max=22718, avg=13876.10, stdev=2675.19 00:21:27.967 lat (usec): min=7654, max=22751, avg=13914.88, stdev=2676.76 00:21:27.967 clat percentiles (usec): 00:21:27.967 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[10683], 20.00th=[11469], 00:21:27.967 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13304], 60.00th=[14222], 00:21:27.967 | 70.00th=[15270], 80.00th=[16450], 90.00th=[17695], 95.00th=[18744], 00:21:27.967 | 99.00th=[20579], 99.50th=[21103], 99.90th=[21890], 99.95th=[22152], 00:21:27.967 | 99.99th=[22676] 00:21:27.967 bw ( KiB/s): min=52768, max=65408, per=90.62%, avg=58728.00, stdev=5440.34, samples=4 00:21:27.967 iops : min= 3298, max= 4088, avg=3670.50, stdev=340.02, samples=4 00:21:27.967 lat (msec) : 4=0.14%, 10=34.98%, 20=64.32%, 50=0.56% 00:21:27.967 cpu : usr=80.41%, sys=14.91%, ctx=7, majf=0, minf=2191 00:21:27.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:21:27.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:27.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:27.967 issued rwts: total=14316,7388,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:27.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:27.967 00:21:27.967 Run status group 0 (all jobs): 00:21:27.967 READ: bw=111MiB/s (117MB/s), 111MiB/s-111MiB/s (117MB/s-117MB/s), io=224MiB (235MB), run=2007-2007msec 00:21:27.967 WRITE: bw=63.3MiB/s (66.4MB/s), 63.3MiB/s-63.3MiB/s (66.4MB/s-66.4MB/s), io=115MiB (121MB), run=1824-1824msec 00:21:28.226 ----------------------------------------------------- 00:21:28.226 Suppressions used: 00:21:28.226 count bytes template 00:21:28.226 1 57 /usr/src/fio/parse.c 00:21:28.226 182 17472 /usr/src/fio/iolog.c 00:21:28.226 1 8 libtcmalloc_minimal.so 00:21:28.226 ----------------------------------------------------- 00:21:28.226 00:21:28.226 03:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:28.485 03:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:28.485 03:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:28.485 03:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:28.485 03:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:21:28.485 03:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:21:28.485 03:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:28.485 03:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:21:28.485 03:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:28.485 03:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:21:28.485 03:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:28.485 03:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:21:28.744 Nvme0n1 00:21:28.744 03:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:29.003 03:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=a9aecd26-a8d2-46a7-88b2-998235f155bf 00:21:29.003 03:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb a9aecd26-a8d2-46a7-88b2-998235f155bf 00:21:29.003 03:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=a9aecd26-a8d2-46a7-88b2-998235f155bf 00:21:29.003 03:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:21:29.003 03:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:21:29.003 03:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:21:29.003 03:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:29.263 03:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:21:29.263 { 00:21:29.263 "uuid": "a9aecd26-a8d2-46a7-88b2-998235f155bf", 00:21:29.263 "name": "lvs_0", 00:21:29.263 "base_bdev": "Nvme0n1", 00:21:29.263 "total_data_clusters": 4, 00:21:29.263 "free_clusters": 4, 00:21:29.263 "block_size": 4096, 00:21:29.263 "cluster_size": 1073741824 00:21:29.263 } 00:21:29.263 ]' 00:21:29.263 03:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="a9aecd26-a8d2-46a7-88b2-998235f155bf") .free_clusters' 00:21:29.263 03:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=4 00:21:29.263 03:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="a9aecd26-a8d2-46a7-88b2-998235f155bf") .cluster_size' 00:21:29.523 03:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:21:29.523 03:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4096 00:21:29.523 4096 00:21:29.523 03:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1378 -- # echo 4096 00:21:29.523 03:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:29.782 10a034d7-c02f-4ba3-8d93-ba109a2d20fd 00:21:29.782 03:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:30.041 03:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:30.300 03:06:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:21:30.559 03:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:30.559 03:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:30.559 03:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:30.559 03:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:30.559 03:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:30.559 03:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:30.559 03:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:30.559 03:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:30.559 03:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:30.559 03:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:30.559 03:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:30.559 03:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:30.559 03:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:30.559 03:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:30.559 03:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:21:30.559 03:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:30.559 03:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:30.559 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:30.559 fio-3.35 00:21:30.559 Starting 1 thread 
00:21:33.092 00:21:33.092 test: (groupid=0, jobs=1): err= 0: pid=80906: Thu Dec 5 03:06:03 2024 00:21:33.092 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(40.0MiB/2010msec) 00:21:33.092 slat (usec): min=2, max=197, avg= 3.45, stdev= 3.49 00:21:33.092 clat (usec): min=3230, max=23836, avg=13093.96, stdev=1114.41 00:21:33.092 lat (usec): min=3234, max=23840, avg=13097.41, stdev=1114.26 00:21:33.092 clat percentiles (usec): 00:21:33.092 | 1.00th=[10814], 5.00th=[11600], 10.00th=[11863], 20.00th=[12256], 00:21:33.092 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:21:33.092 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14353], 95.00th=[14746], 00:21:33.092 | 99.00th=[15533], 99.50th=[16188], 99.90th=[20317], 99.95th=[21890], 00:21:33.092 | 99.99th=[22152] 00:21:33.092 bw ( KiB/s): min=19408, max=20800, per=99.71%, avg=20318.00, stdev=636.87, samples=4 00:21:33.092 iops : min= 4852, max= 5200, avg=5079.50, stdev=159.22, samples=4 00:21:33.092 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(39.8MiB/2010msec); 0 zone resets 00:21:33.092 slat (usec): min=2, max=126, avg= 3.40, stdev= 2.55 00:21:33.092 clat (usec): min=1997, max=21798, avg=11887.29, stdev=1024.33 00:21:33.092 lat (usec): min=2005, max=21801, avg=11890.68, stdev=1024.25 00:21:33.092 clat percentiles (usec): 00:21:33.092 | 1.00th=[ 9765], 5.00th=[10421], 10.00th=[10814], 20.00th=[11076], 00:21:33.092 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:21:33.092 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13042], 95.00th=[13435], 00:21:33.092 | 99.00th=[14091], 99.50th=[14484], 99.90th=[19792], 99.95th=[20317], 00:21:33.092 | 99.99th=[20579] 00:21:33.092 bw ( KiB/s): min=20040, max=20608, per=100.00%, avg=20306.00, stdev=232.81, samples=4 00:21:33.092 iops : min= 5010, max= 5152, avg=5076.50, stdev=58.20, samples=4 00:21:33.092 lat (msec) : 2=0.01%, 4=0.06%, 10=1.01%, 20=98.79%, 50=0.13% 00:21:33.092 cpu : usr=74.07%, sys=20.11%, ctx=7, majf=0, minf=1554 00:21:33.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:33.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:33.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:33.092 issued rwts: total=10239,10200,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:33.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:33.092 00:21:33.092 Run status group 0 (all jobs): 00:21:33.092 READ: bw=19.9MiB/s (20.9MB/s), 19.9MiB/s-19.9MiB/s (20.9MB/s-20.9MB/s), io=40.0MiB (41.9MB), run=2010-2010msec 00:21:33.092 WRITE: bw=19.8MiB/s (20.8MB/s), 19.8MiB/s-19.8MiB/s (20.8MB/s-20.8MB/s), io=39.8MiB (41.8MB), run=2010-2010msec 00:21:33.352 ----------------------------------------------------- 00:21:33.352 Suppressions used: 00:21:33.352 count bytes template 00:21:33.352 1 58 /usr/src/fio/parse.c 00:21:33.352 1 8 libtcmalloc_minimal.so 00:21:33.352 ----------------------------------------------------- 00:21:33.352 00:21:33.352 03:06:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:33.611 03:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:33.611 03:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=9176c1c3-bee5-41bd-a12d-97a7a307c152 00:21:33.611 03:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # 
get_lvs_free_mb 9176c1c3-bee5-41bd-a12d-97a7a307c152 00:21:33.611 03:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=9176c1c3-bee5-41bd-a12d-97a7a307c152 00:21:33.611 03:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:21:33.611 03:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:21:33.611 03:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:21:33.611 03:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:33.870 03:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:21:33.870 { 00:21:33.870 "uuid": "a9aecd26-a8d2-46a7-88b2-998235f155bf", 00:21:33.870 "name": "lvs_0", 00:21:33.870 "base_bdev": "Nvme0n1", 00:21:33.870 "total_data_clusters": 4, 00:21:33.870 "free_clusters": 0, 00:21:33.870 "block_size": 4096, 00:21:33.870 "cluster_size": 1073741824 00:21:33.870 }, 00:21:33.870 { 00:21:33.870 "uuid": "9176c1c3-bee5-41bd-a12d-97a7a307c152", 00:21:33.870 "name": "lvs_n_0", 00:21:33.870 "base_bdev": "10a034d7-c02f-4ba3-8d93-ba109a2d20fd", 00:21:33.870 "total_data_clusters": 1022, 00:21:33.870 "free_clusters": 1022, 00:21:33.870 "block_size": 4096, 00:21:33.870 "cluster_size": 4194304 00:21:33.870 } 00:21:33.870 ]' 00:21:33.870 03:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="9176c1c3-bee5-41bd-a12d-97a7a307c152") .free_clusters' 00:21:34.129 03:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1022 00:21:34.129 03:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="9176c1c3-bee5-41bd-a12d-97a7a307c152") .cluster_size' 00:21:34.129 03:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:21:34.129 03:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4088 00:21:34.129 4088 00:21:34.129 03:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4088 00:21:34.129 03:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:34.410 a01535fd-c900-434f-b997-afbbd5ba5a28 00:21:34.410 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:34.679 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:34.937 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:21:34.937 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:34.937 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 
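The get_lvs_free_mb traces above derive the usable size of each lvstore by multiplying free_clusters by cluster_size from bdev_lvol_get_lvstores: 4 x 1 GiB clusters give 4096 MiB for lvs_0, and 1022 x 4 MiB clusters give 4088 MiB for the nested lvs_n_0, which is the size then handed to bdev_lvol_create for lbd_nest_0. A minimal standalone sketch of that computation, assuming the same rpc.py path used in this run and the lvs_n_0 UUID reported above:

    # sketch only -- mirrors the jq-based math in the trace above, not the real helper
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=9176c1c3-bee5-41bd-a12d-97a7a307c152          # lvs_n_0 UUID from this run
    lvs=$("$rpc" bdev_lvol_get_lvstores)
    fc=$(echo "$lvs" | jq ".[] | select(.uuid==\"$uuid\") .free_clusters")   # 1022
    cs=$(echo "$lvs" | jq ".[] | select(.uuid==\"$uuid\") .cluster_size")    # 4194304
    echo $(( fc * cs / 1024 / 1024 ))                  # 4088 MiB, as logged above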
00:21:34.937 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:34.937 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:34.937 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:34.937 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:34.937 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:34.937 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:34.937 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:34.937 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:34.937 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:34.937 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:35.196 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:35.196 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:35.196 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:21:35.196 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:35.196 03:06:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:35.196 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:35.196 fio-3.35 00:21:35.196 Starting 1 thread 00:21:37.731 00:21:37.731 test: (groupid=0, jobs=1): err= 0: pid=80976: Thu Dec 5 03:06:08 2024 00:21:37.731 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(35.9MiB/2011msec) 00:21:37.731 slat (usec): min=2, max=337, avg= 3.41, stdev= 4.61 00:21:37.731 clat (usec): min=4056, max=25876, avg=14585.52, stdev=1266.74 00:21:37.731 lat (usec): min=4072, max=25879, avg=14588.93, stdev=1266.24 00:21:37.731 clat percentiles (usec): 00:21:37.731 | 1.00th=[11994], 5.00th=[12780], 10.00th=[13173], 20.00th=[13698], 00:21:37.731 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14615], 60.00th=[14877], 00:21:37.731 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16450], 00:21:37.731 | 99.00th=[17433], 99.50th=[18220], 99.90th=[24511], 99.95th=[25560], 00:21:37.731 | 99.99th=[25822] 00:21:37.731 bw ( KiB/s): min=17216, max=18728, per=99.75%, avg=18256.00, stdev=699.04, samples=4 00:21:37.731 iops : min= 4304, max= 4682, avg=4564.00, stdev=174.76, samples=4 00:21:37.731 write: IOPS=4574, BW=17.9MiB/s (18.7MB/s)(35.9MiB/2011msec); 0 zone resets 00:21:37.731 slat (usec): min=2, max=147, avg= 3.53, stdev= 3.01 00:21:37.731 clat (usec): min=2709, max=24526, avg=13203.79, stdev=1167.51 00:21:37.731 lat (usec): min=2726, max=24529, avg=13207.32, stdev=1167.21 00:21:37.731 clat percentiles (usec): 00:21:37.731 | 1.00th=[10814], 5.00th=[11600], 10.00th=[11863], 
20.00th=[12387], 00:21:37.731 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:21:37.731 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14484], 95.00th=[14877], 00:21:37.731 | 99.00th=[15795], 99.50th=[16188], 99.90th=[20841], 99.95th=[22676], 00:21:37.731 | 99.99th=[24511] 00:21:37.731 bw ( KiB/s): min=18000, max=18560, per=99.96%, avg=18290.00, stdev=247.73, samples=4 00:21:37.731 iops : min= 4500, max= 4640, avg=4572.50, stdev=61.93, samples=4 00:21:37.731 lat (msec) : 4=0.01%, 10=0.40%, 20=99.37%, 50=0.22% 00:21:37.731 cpu : usr=77.41%, sys=17.66%, ctx=27, majf=0, minf=1553 00:21:37.731 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:37.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:37.731 issued rwts: total=9201,9199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:37.731 00:21:37.731 Run status group 0 (all jobs): 00:21:37.731 READ: bw=17.9MiB/s (18.7MB/s), 17.9MiB/s-17.9MiB/s (18.7MB/s-18.7MB/s), io=35.9MiB (37.7MB), run=2011-2011msec 00:21:37.731 WRITE: bw=17.9MiB/s (18.7MB/s), 17.9MiB/s-17.9MiB/s (18.7MB/s-18.7MB/s), io=35.9MiB (37.7MB), run=2011-2011msec 00:21:37.731 ----------------------------------------------------- 00:21:37.731 Suppressions used: 00:21:37.731 count bytes template 00:21:37.731 1 58 /usr/src/fio/parse.c 00:21:37.731 1 8 libtcmalloc_minimal.so 00:21:37.731 ----------------------------------------------------- 00:21:37.731 00:21:37.731 03:06:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:37.990 03:06:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:21:38.248 03:06:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:38.528 03:06:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:38.787 03:06:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:38.787 03:06:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:39.046 03:06:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:39.983 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:39.983 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:39.983 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:39.983 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:39.983 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:21:39.983 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:39.983 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:21:39.983 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:39.983 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:39.983 rmmod 
nvme_tcp 00:21:39.983 rmmod nvme_fabrics 00:21:39.983 rmmod nvme_keyring 00:21:39.984 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:39.984 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:21:39.984 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:21:39.984 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 80682 ']' 00:21:39.984 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 80682 00:21:39.984 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 80682 ']' 00:21:39.984 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 80682 00:21:39.984 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:21:39.984 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:39.984 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80682 00:21:39.984 killing process with pid 80682 00:21:39.984 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:39.984 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:39.984 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80682' 00:21:39.984 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 80682 00:21:39.984 03:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 80682 00:21:40.921 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:40.921 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:40.921 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:40.921 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:21:40.921 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:40.921 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:21:40.921 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:21:40.921 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:40.921 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:40.921 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:40.921 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:40.921 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:40.921 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set 
nvmf_tgt_br2 down 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:21:41.180 00:21:41.180 real 0m22.190s 00:21:41.180 user 1m34.629s 00:21:41.180 sys 0m4.730s 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:41.180 ************************************ 00:21:41.180 END TEST nvmf_fio_host 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.180 ************************************ 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.180 ************************************ 00:21:41.180 START TEST nvmf_failover 00:21:41.180 ************************************ 00:21:41.180 03:06:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:41.441 * Looking for test storage... 
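Before the failover test output continues, the flow that nvmf_fio_host exercised above can be condensed into a short sketch: host/fio.sh builds an NVMe-oF TCP target over rpc.py (transport, backing bdev, subsystem, namespace, listener on 10.0.0.3:4420) and then drives I/O through the SPDK fio plugin with ASAN preloaded. All commands below are taken from the trace; the contents of example_config.fio (ioengine=spdk job definition) are assumed rather than shown in the log.

    # condensed sketch of the host/fio.sh sequence traced above
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc1
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # run fio against the target through the SPDK plugin, ASAN library preloaded first
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096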
00:21:41.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:41.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.441 --rc genhtml_branch_coverage=1 00:21:41.441 --rc genhtml_function_coverage=1 00:21:41.441 --rc genhtml_legend=1 00:21:41.441 --rc geninfo_all_blocks=1 00:21:41.441 --rc geninfo_unexecuted_blocks=1 00:21:41.441 00:21:41.441 ' 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:41.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.441 --rc genhtml_branch_coverage=1 00:21:41.441 --rc genhtml_function_coverage=1 00:21:41.441 --rc genhtml_legend=1 00:21:41.441 --rc geninfo_all_blocks=1 00:21:41.441 --rc geninfo_unexecuted_blocks=1 00:21:41.441 00:21:41.441 ' 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:41.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.441 --rc genhtml_branch_coverage=1 00:21:41.441 --rc genhtml_function_coverage=1 00:21:41.441 --rc genhtml_legend=1 00:21:41.441 --rc geninfo_all_blocks=1 00:21:41.441 --rc geninfo_unexecuted_blocks=1 00:21:41.441 00:21:41.441 ' 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:41.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.441 --rc genhtml_branch_coverage=1 00:21:41.441 --rc genhtml_function_coverage=1 00:21:41.441 --rc genhtml_legend=1 00:21:41.441 --rc geninfo_all_blocks=1 00:21:41.441 --rc geninfo_unexecuted_blocks=1 00:21:41.441 00:21:41.441 ' 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.441 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.441 
03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:41.442 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
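nvmftestinit has just started for the failover test; the nvmf_veth_init commands traced in the following lines rebuild the 10.0.0.0/24 test topology, with the target interfaces (10.0.0.3, 10.0.0.4) inside the nvmf_tgt_ns_spdk namespace, the initiator interfaces (10.0.0.1, 10.0.0.2) in the root namespace, and everything joined through the nvmf_br bridge. A minimal sketch of that topology, reduced to the first initiator/target pair and omitting the second pair and the iptables rules that the trace also sets up:

    # sketch of the veth/netns layout built by nvmf_veth_init below (first pair only)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up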
00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:41.442 Cannot find device "nvmf_init_br" 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:41.442 Cannot find device "nvmf_init_br2" 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:21:41.442 Cannot find device "nvmf_tgt_br" 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:41.442 Cannot find device "nvmf_tgt_br2" 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:41.442 Cannot find device "nvmf_init_br" 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:41.442 Cannot find device "nvmf_init_br2" 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:21:41.442 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:41.701 Cannot find device "nvmf_tgt_br" 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:41.701 Cannot find device "nvmf_tgt_br2" 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:41.701 Cannot find device "nvmf_br" 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:41.701 Cannot find device "nvmf_init_if" 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:41.701 Cannot find device "nvmf_init_if2" 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:41.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:41.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:41.701 
03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:41.701 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:41.702 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:21:41.961 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:41.961 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:41.961 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:21:41.961 00:21:41.961 --- 10.0.0.3 ping statistics --- 00:21:41.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.961 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:21:41.961 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:41.961 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:41.961 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:21:41.961 00:21:41.961 --- 10.0.0.4 ping statistics --- 00:21:41.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.961 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:21:41.961 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:41.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:41.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:21:41.961 00:21:41.961 --- 10.0.0.1 ping statistics --- 00:21:41.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.962 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:41.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:21:41.962 00:21:41.962 --- 10.0.0.2 ping statistics --- 00:21:41.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.962 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=81288 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 81288 00:21:41.962 03:06:12 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 81288 ']' 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.962 03:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:41.962 [2024-12-05 03:06:12.710208] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:21:41.962 [2024-12-05 03:06:12.710376] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.220 [2024-12-05 03:06:12.900411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:42.220 [2024-12-05 03:06:13.027638] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.221 [2024-12-05 03:06:13.027709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.221 [2024-12-05 03:06:13.027734] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.221 [2024-12-05 03:06:13.027750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.221 [2024-12-05 03:06:13.027795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
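
Note: the nvmf/common.sh trace above builds a small veth/bridge topology and then launches the target inside the nvmf_tgt_ns_spdk namespace. The sketch below only condenses the commands already logged; names, addresses and flags are copied from the trace, while the loops and the trailing '&' are shorthand for what the helper functions do (they also tag the iptables rules with SPDK_NVMF comments and capture the target pid, omitted here).

  # veth pairs: the *_if ends carry the addresses, the *_br ends get bridged
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring the links up and hang all *_br ends off one bridge
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # let NVMe/TCP (port 4420) in and allow forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # target runs inside the namespace; its RPCs go to /var/tmp/spdk.sock
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
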
00:21:42.221 [2024-12-05 03:06:13.029930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.221 [2024-12-05 03:06:13.030059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.221 [2024-12-05 03:06:13.030073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:42.479 [2024-12-05 03:06:13.255347] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:43.045 03:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.045 03:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:43.045 03:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:43.045 03:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:43.045 03:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:43.045 03:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.045 03:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:43.304 [2024-12-05 03:06:13.960109] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.304 03:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:43.563 Malloc0 00:21:43.563 03:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:43.822 03:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:44.081 03:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:44.340 [2024-12-05 03:06:15.076592] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:44.340 03:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:44.597 [2024-12-05 03:06:15.300724] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:44.597 03:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:21:44.855 [2024-12-05 03:06:15.521109] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:21:44.855 03:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=81346 00:21:44.855 03:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:44.855 03:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
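
Note: with the target listening on /var/tmp/spdk.sock, host/failover.sh provisions it through rpc.py and then starts bdevperf as the initiator. A condensed recap of the calls traced above, with paths and arguments as logged; the RPC shell variable is just shorthand introduced here.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0                 # RAM-backed bdev to export
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # three portals on the namespaced address; the extra ones are the failover paths
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
  # initiator: bdevperf started idle (-z) on its own RPC socket, QD 128, 4 KiB verify, 15 s
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
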
00:21:44.855 03:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 81346 /var/tmp/bdevperf.sock 00:21:44.855 03:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 81346 ']' 00:21:44.855 03:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:44.855 03:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.855 03:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.855 03:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.855 03:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:45.790 03:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.790 03:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:45.790 03:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:46.357 NVMe0n1 00:21:46.357 03:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:46.617 00:21:46.617 03:06:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:46.617 03:06:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=81374 00:21:46.617 03:06:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:47.557 03:06:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:47.816 03:06:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:51.104 03:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:51.104 00:21:51.104 03:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:51.363 03:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:54.652 03:06:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:54.652 [2024-12-05 03:06:25.471608] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:54.910 03:06:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:55.854 03:06:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:21:56.113 03:06:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 81374 00:22:02.766 { 00:22:02.766 "results": [ 00:22:02.766 { 00:22:02.766 "job": "NVMe0n1", 00:22:02.766 "core_mask": "0x1", 00:22:02.766 "workload": "verify", 00:22:02.766 "status": "finished", 00:22:02.766 "verify_range": { 00:22:02.766 "start": 0, 00:22:02.766 "length": 16384 00:22:02.766 }, 00:22:02.766 "queue_depth": 128, 00:22:02.766 "io_size": 4096, 00:22:02.766 "runtime": 15.011165, 00:22:02.766 "iops": 8170.251942470821, 00:22:02.766 "mibps": 31.915046650276643, 00:22:02.766 "io_failed": 3621, 00:22:02.766 "io_timeout": 0, 00:22:02.766 "avg_latency_us": 15185.40120467181, 00:22:02.766 "min_latency_us": 651.6363636363636, 00:22:02.766 "max_latency_us": 23116.334545454545 00:22:02.766 } 00:22:02.766 ], 00:22:02.766 "core_count": 1 00:22:02.766 } 00:22:02.766 03:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 81346 00:22:02.766 03:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 81346 ']' 00:22:02.766 03:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 81346 00:22:02.766 03:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:02.766 03:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.766 03:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81346 00:22:02.766 killing process with pid 81346 00:22:02.766 03:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:02.766 03:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:02.766 03:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81346' 00:22:02.766 03:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 81346 00:22:02.766 03:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 81346 00:22:02.766 03:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:02.767 [2024-12-05 03:06:15.623735] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:22:02.767 [2024-12-05 03:06:15.623930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81346 ] 00:22:02.767 [2024-12-05 03:06:15.786514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.767 [2024-12-05 03:06:15.876614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.767 [2024-12-05 03:06:16.037282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:02.767 Running I/O for 15 seconds... 
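
Note: the JSON block a few lines up is bdevperf's summary for that 15-second verify run, during which host/failover.sh kept moving the listeners underneath the controller attached with -x failover. Below is the sequence of listener moves reconstructed from the failover.sh trace above (the RPC/NQN variables are shorthand added here), plus a quick check that the reported "mibps" is simply IOPS times the 4096-byte I/O size.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; NQN=nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4420    # failover.sh@43
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
       -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n $NQN -x failover            # failover.sh@47
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4421    # failover.sh@48
  $RPC nvmf_subsystem_add_listener    $NQN -t tcp -a 10.0.0.3 -s 4420    # failover.sh@53
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4422    # failover.sh@57
  # 8170.25 IOPS * 4096 B per I/O / 2^20 B per MiB -> ~31.915, matching "mibps" above
  awk 'BEGIN { printf "%.6f\n", 8170.251942470821 * 4096 / (1024 * 1024) }'
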
00:22:02.767 6294.00 IOPS, 24.59 MiB/s [2024-12-05T03:06:33.611Z] [2024-12-05 03:06:18.531473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.767 [2024-12-05 03:06:18.531573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.531616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:58664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.531643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.531666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:58672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.531688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.531709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:58680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.531731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.531752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:58688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.531792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.531819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:58696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.531842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.531863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.531885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.531907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.531928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.531949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.767 [2024-12-05 03:06:18.531970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.531991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.532033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.532100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:58736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.532142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.532187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.532229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.532275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.532318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.532362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:58784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.532405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.532448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:58800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 
03:06:18.532489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.532533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.532575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.532626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.532671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.532715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.532784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.532833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:58864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.532876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:58872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.767 [2024-12-05 03:06:18.532943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:58880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.767 [2024-12-05 03:06:18.532965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.532986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:58904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:58912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:89 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59048 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.533964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.533985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.534007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.534027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.534053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.534075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.534096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.534117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.534139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.534160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.534181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.534202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.534223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.534245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.534266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.534287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.534309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.534330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.768 [2024-12-05 03:06:18.534351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.534371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:02.768 [2024-12-05 03:06:18.534395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.768 [2024-12-05 03:06:18.534427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.534450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.534472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.534494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.534517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.534539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.534560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.534581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.534602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.534626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.534647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.534669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.534690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.534711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.534732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.534767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.534792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.534815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.534880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.534904] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.534927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.534959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.534981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.535026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.535083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.535129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.535188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.535247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.535308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.535353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.535398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535421] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.535442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.535486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.535529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.535572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.535651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.535707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.535756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.535802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.535849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.535913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.535964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.535987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.536011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.536037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.536061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.536099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.769 [2024-12-05 03:06:18.536121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.769 [2024-12-05 03:06:18.536143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.770 [2024-12-05 03:06:18.536169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.770 [2024-12-05 03:06:18.536192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.770 [2024-12-05 03:06:18.536215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.770 [2024-12-05 03:06:18.536238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.770 [2024-12-05 03:06:18.536260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.770 [2024-12-05 03:06:18.536284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.770 [2024-12-05 03:06:18.536306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.770 [2024-12-05 03:06:18.536353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.770 [2024-12-05 03:06:18.536377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.770 [2024-12-05 03:06:18.536399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.770 [2024-12-05 03:06:18.536421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.770 [2024-12-05 03:06:18.536445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0
00:22:02.770 [... repeated nvme_qpair.c NOTICE pairs condensed: the remaining queued READ commands (sqid:1, lba 59456-59544) and WRITE commands (sqid:1, lba 59568-59672) were each printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) qid:1 ...]
[2024-12-05 03:06:18.537734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set
[2024-12-05 03:06:18.537776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-12-05 03:06:18.537797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-12-05 03:06:18.537814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59680 len:8 PRP1 0x0 PRP2 0x0
[2024-12-05 03:06:18.537833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-05 03:06:18.538088] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
[... four queued ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:0-3) likewise completed as ABORTED - SQ DELETION (00/08) ...]
[2024-12-05 03:06:18.538336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
[2024-12-05 03:06:18.538427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
[2024-12-05 03:06:18.542348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
[2024-12-05 03:06:18.571060] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
6986.50 IOPS, 27.29 MiB/s [2024-12-05T03:06:33.615Z] 7473.67 IOPS, 29.19 MiB/s [2024-12-05T03:06:33.615Z] 7733.25 IOPS, 30.21 MiB/s [2024-12-05T03:06:33.615Z]
[... repeated nvme_qpair.c NOTICE pairs starting at 03:06:22.175259 condensed: queued READ commands (sqid:1, lba 48728-49192) and WRITE commands (sqid:1, lba 49208-49744) were each printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) qid:1 ...]
[2024-12-05 03:06:22.180919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-12-05 03:06:22.180944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-12-05 03:06:22.180962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49200 len:8 PRP1 0x0 PRP2 0x0
[2024-12-05 03:06:22.180982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-05 03:06:22.181281] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422
[... four queued ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:3-0) likewise completed as ABORTED - SQ DELETION (00/08) ...]
[2024-12-05 03:06:22.181520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
[2024-12-05 03:06:22.181591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
[2024-12-05 03:06:22.185577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
[2024-12-05 03:06:22.215706] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
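The status repeated throughout the abort records above, "ABORTED - SQ DELETION (00/08)", is the generic NVMe status (sct 0x0, sc 0x08) that queued commands receive when their submission queue is torn down along with the failing TCP path; a multipath consumer can treat it as retryable rather than as a data error. As a minimal sketch only (not part of this test run; classify_cpl and main are hypothetical scaffolding, assuming only SPDK's public spdk/nvme.h definitions), a completion callback could recognize that status like this:

#include <stdio.h>
#include "spdk/nvme.h"

/*
 * Illustration: map the completion status printed in the log to a decision.
 * "ABORTED - SQ DELETION (00/08)" means the command was flushed because its
 * submission queue went away, not that the medium or controller rejected it.
 */
static const char *
classify_cpl(const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		return "success";
	}
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		return "aborted by SQ deletion - retry on another path";
	}
	return "I/O error";
}

int
main(void)
{
	struct spdk_nvme_cpl cpl = {0};

	/* Fake the status seen in the log above: sct 0x0, sc 0x08. */
	cpl.status.sct = SPDK_NVME_SCT_GENERIC;
	cpl.status.sc = SPDK_NVME_SC_ABORTED_SQ_DELETION;
	printf("%s\n", classify_cpl(&cpl));
	return 0;
}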
00:22:02.775 7774.80 IOPS, 30.37 MiB/s [2024-12-05T03:06:33.619Z] 7902.00 IOPS, 30.87 MiB/s [2024-12-05T03:06:33.619Z] 7996.00 IOPS, 31.23 MiB/s [2024-12-05T03:06:33.619Z] 8057.50 IOPS, 31.47 MiB/s [2024-12-05T03:06:33.619Z] 8096.44 IOPS, 31.63 MiB/s [2024-12-05T03:06:33.619Z] [2024-12-05 03:06:26.768501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.775 [2024-12-05 03:06:26.768577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.775 [2024-12-05 03:06:26.768632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.775 [2024-12-05 03:06:26.768654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.775 [2024-12-05 03:06:26.768674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.775 [2024-12-05 03:06:26.768692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.775 [2024-12-05 03:06:26.768712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:104808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.775 [2024-12-05 03:06:26.768729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.775 [2024-12-05 03:06:26.768748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.775 [2024-12-05 03:06:26.768807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.775 [2024-12-05 03:06:26.768832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.775 [2024-12-05 03:06:26.768851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.775 [2024-12-05 03:06:26.768870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.775 [2024-12-05 03:06:26.768888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.775 [2024-12-05 03:06:26.768906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.775 [2024-12-05 03:06:26.768924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.775 [2024-12-05 03:06:26.768943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:104336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.768960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.768980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 
lba:104344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.768997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.769033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.769069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.769105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.769140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.769178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.769214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.769253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.769304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.769342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104424 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.769378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.769415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.769452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.769489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.769543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.776 [2024-12-05 03:06:26.769582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.776 [2024-12-05 03:06:26.769618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.776 [2024-12-05 03:06:26.769655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.776 [2024-12-05 03:06:26.769691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.776 [2024-12-05 03:06:26.769727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.776 
[2024-12-05 03:06:26.769811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.776 [2024-12-05 03:06:26.769855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.776 [2024-12-05 03:06:26.769893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.769931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.769970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.769989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.770007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.770027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:104488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.770045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.770065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.770083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.770102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.770120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.770140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.770172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.770191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:104520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.770208] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.776 [2024-12-05 03:06:26.770227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:104528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.776 [2024-12-05 03:06:26.770245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.770264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.777 [2024-12-05 03:06:26.770282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.770310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.777 [2024-12-05 03:06:26.770330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.770350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:104552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.777 [2024-12-05 03:06:26.770368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.770387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:104560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.777 [2024-12-05 03:06:26.770405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.770423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.777 [2024-12-05 03:06:26.770441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.770460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.777 [2024-12-05 03:06:26.770477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.770496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.777 [2024-12-05 03:06:26.770513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.770532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:104912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.777 [2024-12-05 03:06:26.770550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.770570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.777 [2024-12-05 03:06:26.770587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.770606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:104928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.777 [2024-12-05 03:06:26.770624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.770642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.777 [2024-12-05 03:06:26.770660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.770679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.777 [2024-12-05 03:06:26.770696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.770715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.777 [2024-12-05 03:06:26.770732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.770752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.777 [2024-12-05 03:06:26.770790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.770813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.777 [2024-12-05 03:06:26.770858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.770897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.777 [2024-12-05 03:06:26.770917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.770937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.777 [2024-12-05 03:06:26.770956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.770977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.777 [2024-12-05 03:06:26.770996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.771016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.777 [2024-12-05 03:06:26.771035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.771056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.777 [2024-12-05 03:06:26.771074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.771094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:105016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.777 [2024-12-05 03:06:26.771113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.771133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.777 [2024-12-05 03:06:26.771152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.771187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.777 [2024-12-05 03:06:26.771205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.771239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:105040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.777 [2024-12-05 03:06:26.771271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.771290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.777 [2024-12-05 03:06:26.771308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.771328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.777 [2024-12-05 03:06:26.771345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.777 [2024-12-05 03:06:26.771374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.778 [2024-12-05 03:06:26.771395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.771414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.778 [2024-12-05 03:06:26.771432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.771451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:105080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.778 [2024-12-05 03:06:26.771476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.771496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.778 [2024-12-05 03:06:26.771514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.771533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:105096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.778 [2024-12-05 03:06:26.771551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.771570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.778 [2024-12-05 03:06:26.771587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.771624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:104600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.778 [2024-12-05 03:06:26.771642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.771662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.778 [2024-12-05 03:06:26.771680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.771699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:104616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.778 [2024-12-05 03:06:26.771717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.771736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.778 [2024-12-05 03:06:26.771754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.771773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:104632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.778 [2024-12-05 03:06:26.771791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.771827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.778 [2024-12-05 03:06:26.771845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.771878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.778 [2024-12-05 03:06:26.771899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:02.778 [2024-12-05 03:06:26.771931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.778 [2024-12-05 03:06:26.771951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.771971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.778 [2024-12-05 03:06:26.771990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.772010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:104672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.778 [2024-12-05 03:06:26.772029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.772049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.778 [2024-12-05 03:06:26.772067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.772087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:104688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.778 [2024-12-05 03:06:26.772106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.772126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:104696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.778 [2024-12-05 03:06:26.772144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.772164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.778 [2024-12-05 03:06:26.772196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.772215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:104712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.778 [2024-12-05 03:06:26.772248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.772269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:105104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.778 [2024-12-05 03:06:26.772287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.772307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.778 [2024-12-05 03:06:26.772325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 
03:06:26.772345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:105120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.778 [2024-12-05 03:06:26.772363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.772382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.778 [2024-12-05 03:06:26.772400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.772419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.778 [2024-12-05 03:06:26.772447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.772469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.778 [2024-12-05 03:06:26.772488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.772507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:105152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.778 [2024-12-05 03:06:26.772525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.772546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:105160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.778 [2024-12-05 03:06:26.772564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.772583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:105168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.778 [2024-12-05 03:06:26.772601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.772620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.778 [2024-12-05 03:06:26.772638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.772658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.778 [2024-12-05 03:06:26.772676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.778 [2024-12-05 03:06:26.772695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.778 [2024-12-05 03:06:26.772712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.772732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:105200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.779 [2024-12-05 03:06:26.772750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.772786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.779 [2024-12-05 03:06:26.772809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.772829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:104720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.779 [2024-12-05 03:06:26.772847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.772866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.779 [2024-12-05 03:06:26.772885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.772905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.779 [2024-12-05 03:06:26.772923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.772956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.779 [2024-12-05 03:06:26.772976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.772996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.779 [2024-12-05 03:06:26.773014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.773034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:104760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.779 [2024-12-05 03:06:26.773051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.773071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:104768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.779 [2024-12-05 03:06:26.773089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.773107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:22:02.779 [2024-12-05 03:06:26.773131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.779 [2024-12-05 03:06:26.773146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.779 
[2024-12-05 03:06:26.773162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104776 len:8 PRP1 0x0 PRP2 0x0 00:22:02.779 [2024-12-05 03:06:26.773179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.773198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.779 [2024-12-05 03:06:26.773212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.779 [2024-12-05 03:06:26.773225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105216 len:8 PRP1 0x0 PRP2 0x0 00:22:02.779 [2024-12-05 03:06:26.773242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.773258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.779 [2024-12-05 03:06:26.773271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.779 [2024-12-05 03:06:26.773284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105224 len:8 PRP1 0x0 PRP2 0x0 00:22:02.779 [2024-12-05 03:06:26.773301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.773317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.779 [2024-12-05 03:06:26.773330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.779 [2024-12-05 03:06:26.773343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105232 len:8 PRP1 0x0 PRP2 0x0 00:22:02.779 [2024-12-05 03:06:26.773359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.773375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.779 [2024-12-05 03:06:26.773388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.779 [2024-12-05 03:06:26.773401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105240 len:8 PRP1 0x0 PRP2 0x0 00:22:02.779 [2024-12-05 03:06:26.773417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.773444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.779 [2024-12-05 03:06:26.773459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.779 [2024-12-05 03:06:26.773474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105248 len:8 PRP1 0x0 PRP2 0x0 00:22:02.779 [2024-12-05 03:06:26.773491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.773507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.779 [2024-12-05 03:06:26.773521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.779 [2024-12-05 03:06:26.773534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105256 len:8 PRP1 0x0 PRP2 0x0 00:22:02.779 [2024-12-05 03:06:26.773551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.773567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.779 [2024-12-05 03:06:26.773580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.779 [2024-12-05 03:06:26.773594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105264 len:8 PRP1 0x0 PRP2 0x0 00:22:02.779 [2024-12-05 03:06:26.773610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.773626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.779 [2024-12-05 03:06:26.773639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.779 [2024-12-05 03:06:26.773653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105272 len:8 PRP1 0x0 PRP2 0x0 00:22:02.779 [2024-12-05 03:06:26.773676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.773693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.779 [2024-12-05 03:06:26.773706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.779 [2024-12-05 03:06:26.773720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105280 len:8 PRP1 0x0 PRP2 0x0 00:22:02.779 [2024-12-05 03:06:26.773736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.773764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.779 [2024-12-05 03:06:26.773782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.779 [2024-12-05 03:06:26.773796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105288 len:8 PRP1 0x0 PRP2 0x0 00:22:02.779 [2024-12-05 03:06:26.773812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.773829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.779 [2024-12-05 03:06:26.773842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.779 [2024-12-05 03:06:26.773856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105296 len:8 PRP1 0x0 PRP2 0x0 00:22:02.779 [2024-12-05 03:06:26.773872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.773888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.779 [2024-12-05 03:06:26.773901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.779 [2024-12-05 03:06:26.773914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:105304 len:8 PRP1 0x0 PRP2 0x0 00:22:02.779 [2024-12-05 03:06:26.773940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.779 [2024-12-05 03:06:26.773958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.780 [2024-12-05 03:06:26.773972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.780 [2024-12-05 03:06:26.773986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105312 len:8 PRP1 0x0 PRP2 0x0 00:22:02.780 [2024-12-05 03:06:26.774002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.780 [2024-12-05 03:06:26.774019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.780 [2024-12-05 03:06:26.774031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.780 [2024-12-05 03:06:26.774044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105320 len:8 PRP1 0x0 PRP2 0x0 00:22:02.780 [2024-12-05 03:06:26.774061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.780 [2024-12-05 03:06:26.774077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.780 [2024-12-05 03:06:26.774091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.780 [2024-12-05 03:06:26.774104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105328 len:8 PRP1 0x0 PRP2 0x0 00:22:02.780 [2024-12-05 03:06:26.774120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.780 [2024-12-05 03:06:26.774137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.780 [2024-12-05 03:06:26.774149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.780 [2024-12-05 03:06:26.774163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105336 len:8 PRP1 0x0 PRP2 0x0 00:22:02.780 [2024-12-05 03:06:26.774182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.780 [2024-12-05 03:06:26.774200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.780 [2024-12-05 03:06:26.774212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.780 [2024-12-05 03:06:26.774226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105344 len:8 PRP1 0x0 PRP2 0x0 00:22:02.780 [2024-12-05 03:06:26.774242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.780 [2024-12-05 03:06:26.774259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.780 [2024-12-05 03:06:26.774272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.780 [2024-12-05 03:06:26.774285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105352 len:8 PRP1 0x0 PRP2 
0x0 00:22:02.780 [2024-12-05 03:06:26.774301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.780 [2024-12-05 03:06:26.774539] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:22:02.780 [2024-12-05 03:06:26.774612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.780 [2024-12-05 03:06:26.774640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.780 [2024-12-05 03:06:26.774660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.780 [2024-12-05 03:06:26.774691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.780 [2024-12-05 03:06:26.774711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.780 [2024-12-05 03:06:26.774729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.780 [2024-12-05 03:06:26.774747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.780 [2024-12-05 03:06:26.774813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.780 [2024-12-05 03:06:26.774859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:02.780 [2024-12-05 03:06:26.774935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:02.780 [2024-12-05 03:06:26.778600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:02.780 [2024-12-05 03:06:26.810338] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
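The notices above capture one complete failover cycle: the I/Os still queued on the deleted submission queue are completed as ABORTED - SQ DELETION, bdev_nvme moves the trid from 10.0.0.3:4422 back to 10.0.0.3:4420, and the subsequent controller reset succeeds. As a hedged illustration of the pattern this run exercises (the commands below are the same rpc.py calls that appear later in this trace; the paths, addresses, subsystem NQN and the NVMe0 controller name are specific to this job and would differ elsewhere):

  # Register an extra listener so the subsystem is reachable on more than one port,
  # then drop the path the initiator is currently using; bdev_nvme is expected to
  # fail over to another registered path and log "Resetting controller successful".
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1
  sleep 3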
00:22:02.780 8071.30 IOPS, 31.53 MiB/s [2024-12-05T03:06:33.624Z] 8100.82 IOPS, 31.64 MiB/s [2024-12-05T03:06:33.624Z] 8131.08 IOPS, 31.76 MiB/s [2024-12-05T03:06:33.624Z] 8138.54 IOPS, 31.79 MiB/s [2024-12-05T03:06:33.624Z] 8158.07 IOPS, 31.87 MiB/s [2024-12-05T03:06:33.624Z] 8169.40 IOPS, 31.91 MiB/s 00:22:02.780 Latency(us) 00:22:02.780 [2024-12-05T03:06:33.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.780 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:02.780 Verification LBA range: start 0x0 length 0x4000 00:22:02.780 NVMe0n1 : 15.01 8170.25 31.92 241.22 0.00 15185.40 651.64 23116.33 00:22:02.780 [2024-12-05T03:06:33.624Z] =================================================================================================================== 00:22:02.780 [2024-12-05T03:06:33.624Z] Total : 8170.25 31.92 241.22 0.00 15185.40 651.64 23116.33 00:22:02.780 Received shutdown signal, test time was about 15.000000 seconds 00:22:02.780 00:22:02.780 Latency(us) 00:22:02.780 [2024-12-05T03:06:33.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.780 [2024-12-05T03:06:33.624Z] =================================================================================================================== 00:22:02.780 [2024-12-05T03:06:33.624Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.780 03:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:02.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.780 03:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:02.780 03:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:02.780 03:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=81555 00:22:02.780 03:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:02.780 03:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 81555 /var/tmp/bdevperf.sock 00:22:02.780 03:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 81555 ']' 00:22:02.780 03:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.780 03:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.780 03:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:02.780 03:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.780 03:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:03.719 03:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.719 03:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:03.719 03:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:03.719 [2024-12-05 03:06:34.517819] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:03.719 03:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:22:03.978 [2024-12-05 03:06:34.758010] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:22:03.978 03:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:04.237 NVMe0n1 00:22:04.495 03:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:04.752 00:22:04.752 03:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:05.010 00:22:05.010 03:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:05.010 03:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:05.268 03:06:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:05.527 03:06:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:08.817 03:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:08.817 03:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:08.817 03:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=81633 00:22:08.817 03:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:08.817 03:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 81633 00:22:10.197 { 00:22:10.197 "results": [ 00:22:10.197 { 00:22:10.197 "job": "NVMe0n1", 00:22:10.197 "core_mask": "0x1", 00:22:10.197 "workload": "verify", 00:22:10.197 "status": "finished", 00:22:10.197 "verify_range": { 00:22:10.197 "start": 0, 00:22:10.197 "length": 16384 00:22:10.197 }, 00:22:10.197 "queue_depth": 128, 
00:22:10.197 "io_size": 4096, 00:22:10.197 "runtime": 1.005175, 00:22:10.197 "iops": 6416.793095729599, 00:22:10.197 "mibps": 25.065598030193748, 00:22:10.197 "io_failed": 0, 00:22:10.197 "io_timeout": 0, 00:22:10.197 "avg_latency_us": 19854.524346723047, 00:22:10.197 "min_latency_us": 1288.378181818182, 00:22:10.197 "max_latency_us": 17158.516363636365 00:22:10.197 } 00:22:10.197 ], 00:22:10.197 "core_count": 1 00:22:10.197 } 00:22:10.197 03:06:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:10.197 [2024-12-05 03:06:33.425940] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:22:10.197 [2024-12-05 03:06:33.426122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81555 ] 00:22:10.197 [2024-12-05 03:06:33.607522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.197 [2024-12-05 03:06:33.694914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.197 [2024-12-05 03:06:33.849652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:10.197 [2024-12-05 03:06:36.302828] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:22:10.197 [2024-12-05 03:06:36.302993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.197 [2024-12-05 03:06:36.303033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.197 [2024-12-05 03:06:36.303059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.197 [2024-12-05 03:06:36.303081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.197 [2024-12-05 03:06:36.303101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.197 [2024-12-05 03:06:36.303123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.197 [2024-12-05 03:06:36.303143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.197 [2024-12-05 03:06:36.303163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.197 [2024-12-05 03:06:36.303198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:22:10.197 [2024-12-05 03:06:36.303289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:10.197 [2024-12-05 03:06:36.303331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:10.197 [2024-12-05 03:06:36.315857] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
00:22:10.197 Running I/O for 1 seconds... 00:22:10.197 6322.00 IOPS, 24.70 MiB/s 00:22:10.197 Latency(us) 00:22:10.197 [2024-12-05T03:06:41.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.197 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:10.197 Verification LBA range: start 0x0 length 0x4000 00:22:10.197 NVMe0n1 : 1.01 6416.79 25.07 0.00 0.00 19854.52 1288.38 17158.52 00:22:10.197 [2024-12-05T03:06:41.041Z] =================================================================================================================== 00:22:10.197 [2024-12-05T03:06:41.041Z] Total : 6416.79 25.07 0.00 0.00 19854.52 1288.38 17158.52 00:22:10.197 03:06:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:10.197 03:06:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:10.457 03:06:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:10.457 03:06:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:10.457 03:06:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:10.716 03:06:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:11.284 03:06:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:14.572 03:06:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:14.572 03:06:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:14.572 03:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 81555 00:22:14.572 03:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 81555 ']' 00:22:14.572 03:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 81555 00:22:14.572 03:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:14.572 03:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.572 03:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81555 00:22:14.572 killing process with pid 81555 00:22:14.572 03:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.572 03:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.572 03:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81555' 00:22:14.572 03:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 81555 00:22:14.572 03:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 81555 00:22:15.138 03:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:15.396 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- 
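The JSON object printed by bdevperf.py perform_tests a little earlier carries the per-job numbers that feed the latency summary above. A rough sketch of pulling them out for a report, assuming the JSON has been redirected into a file (result.json is hypothetical, not part of this run) and that jq is available on the host:

  # Extract the headline IOPS and average latency from the perform_tests output.
  jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' result.json
  # With the values shown in this trace this would print roughly:
  #   NVMe0n1: 6416.793095729599 IOPS, avg 19854.524346723047 us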
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:15.654 rmmod nvme_tcp 00:22:15.654 rmmod nvme_fabrics 00:22:15.654 rmmod nvme_keyring 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 81288 ']' 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 81288 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 81288 ']' 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 81288 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81288 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:15.654 killing process with pid 81288 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81288' 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 81288 00:22:15.654 03:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 81288 00:22:16.588 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:16.588 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:16.588 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:16.588 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:16.588 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:16.588 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:16.588 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:16.588 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:16.588 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:16.588 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:16.588 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:16.588 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:16.588 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:22:16.846 00:22:16.846 real 0m35.618s 00:22:16.846 user 2m15.899s 00:22:16.846 sys 0m5.734s 00:22:16.846 ************************************ 00:22:16.846 END TEST nvmf_failover 00:22:16.846 ************************************ 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.846 ************************************ 00:22:16.846 START TEST nvmf_host_discovery 00:22:16.846 ************************************ 00:22:16.846 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:17.106 * Looking for test storage... 
00:22:17.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:17.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.106 --rc genhtml_branch_coverage=1 00:22:17.106 --rc genhtml_function_coverage=1 00:22:17.106 --rc genhtml_legend=1 00:22:17.106 --rc geninfo_all_blocks=1 00:22:17.106 --rc geninfo_unexecuted_blocks=1 00:22:17.106 00:22:17.106 ' 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:17.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.106 --rc genhtml_branch_coverage=1 00:22:17.106 --rc genhtml_function_coverage=1 00:22:17.106 --rc genhtml_legend=1 00:22:17.106 --rc geninfo_all_blocks=1 00:22:17.106 --rc geninfo_unexecuted_blocks=1 00:22:17.106 00:22:17.106 ' 00:22:17.106 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:17.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.106 --rc genhtml_branch_coverage=1 00:22:17.106 --rc genhtml_function_coverage=1 00:22:17.106 --rc genhtml_legend=1 00:22:17.106 --rc geninfo_all_blocks=1 00:22:17.106 --rc geninfo_unexecuted_blocks=1 00:22:17.106 00:22:17.106 ' 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:17.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.107 --rc genhtml_branch_coverage=1 00:22:17.107 --rc genhtml_function_coverage=1 00:22:17.107 --rc genhtml_legend=1 00:22:17.107 --rc geninfo_all_blocks=1 00:22:17.107 --rc geninfo_unexecuted_blocks=1 00:22:17.107 00:22:17.107 ' 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:17.107 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:17.107 Cannot find device "nvmf_init_br" 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:17.107 Cannot find device "nvmf_init_br2" 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:17.107 Cannot find device "nvmf_tgt_br" 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:17.107 Cannot find device "nvmf_tgt_br2" 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:22:17.107 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:17.367 Cannot find device "nvmf_init_br" 00:22:17.367 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:22:17.367 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:17.367 Cannot find device "nvmf_init_br2" 00:22:17.367 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:22:17.367 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:17.367 Cannot find device "nvmf_tgt_br" 00:22:17.367 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:22:17.367 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:17.367 Cannot find device "nvmf_tgt_br2" 00:22:17.367 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:22:17.367 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:17.367 Cannot find device "nvmf_br" 00:22:17.367 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:22:17.367 03:06:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:17.367 Cannot find device "nvmf_init_if" 00:22:17.367 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:22:17.367 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:17.367 Cannot find device "nvmf_init_if2" 00:22:17.367 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:22:17.367 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:17.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:22:17.367 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:22:17.367 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:17.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:17.367 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:22:17.367 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:17.367 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:17.367 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:17.367 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:17.367 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:17.368 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:17.628 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:17.628 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:22:17.628 00:22:17.628 --- 10.0.0.3 ping statistics --- 00:22:17.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.628 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:17.628 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:17.628 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:22:17.628 00:22:17.628 --- 10.0.0.4 ping statistics --- 00:22:17.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.628 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:17.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:22:17.628 00:22:17.628 --- 10.0.0.1 ping statistics --- 00:22:17.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.628 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:17.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:17.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:22:17.628 00:22:17.628 --- 10.0.0.2 ping statistics --- 00:22:17.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.628 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=81963 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 81963 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 81963 ']' 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.628 03:06:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.628 [2024-12-05 03:06:48.426697] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:22:17.628 [2024-12-05 03:06:48.427131] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.888 [2024-12-05 03:06:48.609422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.888 [2024-12-05 03:06:48.687611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.888 [2024-12-05 03:06:48.687670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.888 [2024-12-05 03:06:48.687687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.888 [2024-12-05 03:06:48.687708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.888 [2024-12-05 03:06:48.687720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:17.888 [2024-12-05 03:06:48.688786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.147 [2024-12-05 03:06:48.831688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:18.716 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.716 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:18.716 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:18.716 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:18.716 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:18.716 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.716 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:18.716 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.716 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:18.716 [2024-12-05 03:06:49.356847] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.716 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.716 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:18.716 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.716 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:18.716 [2024-12-05 03:06:49.365087] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:18.716 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.716 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:18.716 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.717 03:06:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:18.717 null0 00:22:18.717 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.717 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:18.717 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.717 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:18.717 null1 00:22:18.717 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.717 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:18.717 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.717 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:18.717 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:18.717 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.717 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=81994 00:22:18.717 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:18.717 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 81994 /tmp/host.sock 00:22:18.717 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 81994 ']' 00:22:18.717 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:22:18.717 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.717 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:18.717 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.717 03:06:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:18.717 [2024-12-05 03:06:49.489203] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:22:18.717 [2024-12-05 03:06:49.489544] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81994 ] 00:22:18.976 [2024-12-05 03:06:49.662890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.976 [2024-12-05 03:06:49.786202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.235 [2024-12-05 03:06:49.960079] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.803 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@91 -- # get_subsystem_names 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.063 [2024-12-05 03:06:50.761483] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:20.063 03:06:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.063 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.064 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.322 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:20.322 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:20.322 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:20.322 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:20.322 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:20.322 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.322 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.322 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.323 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:20.323 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:20.323 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:20.323 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:20.323 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:20.323 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:20.323 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:20.323 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:20.323 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.323 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:20.323 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.323 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:20.323 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.323 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:22:20.323 03:06:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:20.890 [2024-12-05 03:06:51.430964] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:20.890 [2024-12-05 03:06:51.430999] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:20.890 [2024-12-05 03:06:51.431036] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:20.890 
[2024-12-05 03:06:51.437023] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:22:20.890 [2024-12-05 03:06:51.491550] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:22:20.890 [2024-12-05 03:06:51.493266] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b280:1 started. 00:22:20.890 [2024-12-05 03:06:51.495685] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:20.890 [2024-12-05 03:06:51.495932] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:20.890 [2024-12-05 03:06:51.500196] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b280 was disconnected and freed. delete nvme_qpair. 00:22:21.149 03:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:21.150 03:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:21.150 03:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:21.409 03:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:21.409 03:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:21.409 03:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.409 03:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.409 03:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:21.409 03:06:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:21.409 03:06:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:21.409 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:21.410 03:06:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.410 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.410 [2024-12-05 03:06:52.224386] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:22:21.410 [2024-12-05 03:06:52.230484] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.670 [2024-12-05 03:06:52.328162] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:21.670 [2024-12-05 03:06:52.329365] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:21.670 [2024-12-05 03:06:52.329419] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:21.670 [2024-12-05 03:06:52.335390] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cn 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:21.670 ode0:10.0.0.3:4421 new path for nvme0 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:21.670 [2024-12-05 03:06:52.394179] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:22:21.670 [2024-12-05 03:06:52.394260] 
bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:21.670 [2024-12-05 03:06:52.394279] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:21.670 [2024-12-05 03:06:52.394289] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:21.670 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.671 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.931 [2024-12-05 03:06:52.552467] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:21.931 [2024-12-05 03:06:52.552508] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:21.931 [2024-12-05 03:06:52.558508] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:22:21.931 [2024-12-05 03:06:52.558541] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:21.931 [2024-12-05 03:06:52.558728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.931 [2024-12-05 03:06:52.558795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:21.931 [2024-12-05 03:06:52.558817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.931 [2024-12-05 03:06:52.558831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.931 [2024-12-05 03:06:52.558859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.931 [2024-12-05 03:06:52.558900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.931 [2024-12-05 03:06:52.558915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.931 [2024-12-05 03:06:52.558928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.931 [2024-12-05 03:06:52.558941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:21.931 03:06:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:21.931 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.932 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:22:21.932 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:21.932 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:21.932 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:21.932 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:21.932 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:21.932 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:21.932 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:21.932 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:21.932 03:06:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:21.932 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:21.932 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.932 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.932 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:21.932 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 
00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.191 03:06:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.568 [2024-12-05 03:06:53.972129] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:23.568 [2024-12-05 03:06:53.972176] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:23.568 [2024-12-05 03:06:53.972213] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:23.568 [2024-12-05 03:06:53.978185] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:22:23.568 [2024-12-05 03:06:54.036660] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:22:23.568 [2024-12-05 03:06:54.037861] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x61500002c680:1 started. 00:22:23.568 [2024-12-05 03:06:54.040288] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:23.568 [2024-12-05 03:06:54.040335] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:23.568 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.568 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:23.568 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:23.568 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:23.568 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:23.568 [2024-12-05 03:06:54.042429] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x61500002c680 was disconnected and freed. delete nvme_qpair. 
00:22:23.568 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.568 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:23.568 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.568 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:23.568 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.568 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.568 request: 00:22:23.568 { 00:22:23.568 "name": "nvme", 00:22:23.568 "trtype": "tcp", 00:22:23.568 "traddr": "10.0.0.3", 00:22:23.568 "adrfam": "ipv4", 00:22:23.568 "trsvcid": "8009", 00:22:23.568 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:23.568 "wait_for_attach": true, 00:22:23.568 "method": "bdev_nvme_start_discovery", 00:22:23.568 "req_id": 1 00:22:23.568 } 00:22:23.568 Got JSON-RPC error response 00:22:23.568 response: 00:22:23.569 { 00:22:23.569 "code": -17, 00:22:23.569 "message": "File exists" 00:22:23.569 } 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.569 request: 00:22:23.569 { 00:22:23.569 "name": "nvme_second", 00:22:23.569 "trtype": "tcp", 00:22:23.569 "traddr": "10.0.0.3", 00:22:23.569 "adrfam": "ipv4", 00:22:23.569 "trsvcid": "8009", 00:22:23.569 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:23.569 "wait_for_attach": true, 00:22:23.569 "method": "bdev_nvme_start_discovery", 00:22:23.569 "req_id": 1 00:22:23.569 } 00:22:23.569 Got JSON-RPC error response 00:22:23.569 response: 00:22:23.569 { 00:22:23.569 "code": -17, 00:22:23.569 "message": "File exists" 00:22:23.569 } 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 
-- # set +x 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.569 03:06:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.511 [2024-12-05 03:06:55.308767] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-12-05 03:06:55.308839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c900 with addr=10.0.0.3, port=8010 00:22:24.511 [2024-12-05 03:06:55.308894] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:24.511 [2024-12-05 03:06:55.308909] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:24.511 [2024-12-05 03:06:55.308921] 
bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:22:25.884 [2024-12-05 03:06:56.308801] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.884 [2024-12-05 03:06:56.308871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002cb80 with addr=10.0.0.3, port=8010 00:22:25.884 [2024-12-05 03:06:56.308924] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:25.884 [2024-12-05 03:06:56.308937] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:25.884 [2024-12-05 03:06:56.308949] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:22:26.819 [2024-12-05 03:06:57.308586] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:22:26.819 request: 00:22:26.819 { 00:22:26.819 "name": "nvme_second", 00:22:26.819 "trtype": "tcp", 00:22:26.819 "traddr": "10.0.0.3", 00:22:26.819 "adrfam": "ipv4", 00:22:26.819 "trsvcid": "8010", 00:22:26.819 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:26.819 "wait_for_attach": false, 00:22:26.819 "attach_timeout_ms": 3000, 00:22:26.819 "method": "bdev_nvme_start_discovery", 00:22:26.819 "req_id": 1 00:22:26.819 } 00:22:26.819 Got JSON-RPC error response 00:22:26.819 response: 00:22:26.819 { 00:22:26.819 "code": -110, 00:22:26.819 "message": "Connection timed out" 00:22:26.819 } 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 81994 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:26.819 rmmod nvme_tcp 00:22:26.819 rmmod nvme_fabrics 00:22:26.819 rmmod nvme_keyring 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 81963 ']' 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 81963 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 81963 ']' 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 81963 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81963 00:22:26.819 killing process with pid 81963 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81963' 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 81963 00:22:26.819 03:06:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 81963 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:27.756 03:06:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.756 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:22:27.756 ************************************ 00:22:27.757 END TEST nvmf_host_discovery 00:22:27.757 ************************************ 00:22:27.757 00:22:27.757 real 0m10.919s 00:22:27.757 user 0m20.423s 00:22:27.757 sys 0m2.073s 00:22:27.757 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:27.757 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.016 ************************************ 00:22:28.016 START TEST nvmf_host_multipath_status 00:22:28.016 ************************************ 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:28.016 * Looking for test storage... 
00:22:28.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:28.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.016 --rc genhtml_branch_coverage=1 00:22:28.016 --rc genhtml_function_coverage=1 00:22:28.016 --rc genhtml_legend=1 00:22:28.016 --rc geninfo_all_blocks=1 00:22:28.016 --rc geninfo_unexecuted_blocks=1 00:22:28.016 00:22:28.016 ' 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:28.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.016 --rc genhtml_branch_coverage=1 00:22:28.016 --rc genhtml_function_coverage=1 00:22:28.016 --rc genhtml_legend=1 00:22:28.016 --rc geninfo_all_blocks=1 00:22:28.016 --rc geninfo_unexecuted_blocks=1 00:22:28.016 00:22:28.016 ' 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:28.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.016 --rc genhtml_branch_coverage=1 00:22:28.016 --rc genhtml_function_coverage=1 00:22:28.016 --rc genhtml_legend=1 00:22:28.016 --rc geninfo_all_blocks=1 00:22:28.016 --rc geninfo_unexecuted_blocks=1 00:22:28.016 00:22:28.016 ' 00:22:28.016 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:28.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.017 --rc genhtml_branch_coverage=1 00:22:28.017 --rc genhtml_function_coverage=1 00:22:28.017 --rc genhtml_legend=1 00:22:28.017 --rc geninfo_all_blocks=1 00:22:28.017 --rc geninfo_unexecuted_blocks=1 00:22:28.017 00:22:28.017 ' 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:28.017 03:06:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:28.017 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:28.017 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:28.276 Cannot find device "nvmf_init_br" 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:28.276 Cannot find device "nvmf_init_br2" 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:28.276 Cannot find device "nvmf_tgt_br" 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:28.276 Cannot find device "nvmf_tgt_br2" 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:28.276 Cannot find device "nvmf_init_br" 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:28.276 Cannot find device "nvmf_init_br2" 00:22:28.276 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:22:28.277 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:28.277 Cannot find device "nvmf_tgt_br" 00:22:28.277 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:22:28.277 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:28.277 Cannot find device "nvmf_tgt_br2" 00:22:28.277 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:22:28.277 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:28.277 Cannot find device "nvmf_br" 00:22:28.277 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:22:28.277 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:22:28.277 Cannot find device "nvmf_init_if" 00:22:28.277 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:22:28.277 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:28.277 Cannot find device "nvmf_init_if2" 00:22:28.277 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:22:28.277 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:28.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:28.277 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:22:28.277 03:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:28.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:28.277 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:22:28.277 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:28.277 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:28.277 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:28.277 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:28.277 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:28.277 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:28.277 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:28.536 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:28.536 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:22:28.536 00:22:28.536 --- 10.0.0.3 ping statistics --- 00:22:28.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.536 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:28.536 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:28.536 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:22:28.536 00:22:28.536 --- 10.0.0.4 ping statistics --- 00:22:28.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.536 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:28.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:28.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:22:28.536 00:22:28.536 --- 10.0.0.1 ping statistics --- 00:22:28.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.536 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:22:28.536 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:28.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:22:28.537 00:22:28.537 --- 10.0.0.2 ping statistics --- 00:22:28.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.537 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=82507 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:28.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 82507 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 82507 ']' 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.537 03:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:28.796 [2024-12-05 03:06:59.396743] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:22:28.796 [2024-12-05 03:06:59.396867] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.796 [2024-12-05 03:06:59.571284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:29.059 [2024-12-05 03:06:59.697642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.059 [2024-12-05 03:06:59.697893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.059 [2024-12-05 03:06:59.698083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.059 [2024-12-05 03:06:59.698325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.059 [2024-12-05 03:06:59.698390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.059 [2024-12-05 03:06:59.700739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.059 [2024-12-05 03:06:59.700779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.332 [2024-12-05 03:06:59.906924] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:29.607 03:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.607 03:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:29.607 03:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:29.607 03:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:29.607 03:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:29.884 03:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.884 03:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=82507 00:22:29.884 03:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:29.884 [2024-12-05 03:07:00.661653] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.884 03:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:30.450 Malloc0 00:22:30.450 03:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:30.450 03:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:30.709 03:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:30.967 [2024-12-05 03:07:01.746276] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:30.967 03:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:31.225 [2024-12-05 03:07:02.030419] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:31.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.225 03:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=82564 00:22:31.225 03:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:31.225 03:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 82564 /var/tmp/bdevperf.sock 00:22:31.225 03:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:31.225 03:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 82564 ']' 00:22:31.225 03:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.225 03:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.225 03:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:31.225 03:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.225 03:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:32.601 03:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.601 03:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:32.601 03:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:32.601 03:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:33.169 Nvme0n1 00:22:33.169 03:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:33.429 Nvme0n1 00:22:33.429 03:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:33.429 03:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:35.335 03:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:35.335 03:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:22:35.593 03:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:35.850 03:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:36.785 03:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:36.785 03:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:36.785 03:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:36.785 03:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.044 03:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.044 03:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:37.044 03:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:37.044 03:07:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.303 03:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:37.303 03:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:37.303 03:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.303 03:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:37.562 03:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.562 03:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:37.562 03:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.562 03:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:37.820 03:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.820 03:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:37.820 03:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.820 03:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:38.079 03:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.079 03:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:38.079 03:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.079 03:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:38.339 03:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.339 03:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:38.339 03:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:38.597 03:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
00:22:38.856 03:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:39.793 03:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:39.793 03:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:39.793 03:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.793 03:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:40.053 03:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:40.053 03:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:40.053 03:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.053 03:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:40.312 03:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.312 03:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:40.312 03:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:40.312 03:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.571 03:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.571 03:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:40.571 03:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.571 03:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:40.830 03:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.830 03:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:40.830 03:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.830 03:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:41.089 03:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.089 03:07:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:41.089 03:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.089 03:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:41.348 03:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.348 03:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:41.348 03:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:41.606 03:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:22:41.865 03:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:42.801 03:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:42.801 03:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:42.801 03:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.801 03:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:43.060 03:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.060 03:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:43.060 03:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.060 03:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:43.320 03:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:43.320 03:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:43.320 03:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:43.320 03:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.579 03:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.579 03:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:43.579 03:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.579 03:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:43.838 03:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.838 03:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:43.838 03:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.838 03:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:44.096 03:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.096 03:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:44.096 03:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.096 03:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:44.356 03:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.356 03:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:44.356 03:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:44.614 03:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:44.872 03:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:45.807 03:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:45.807 03:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:45.807 03:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.807 03:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:46.066 03:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.066 03:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:22:46.066 03:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.066 03:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:46.325 03:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:46.325 03:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:46.325 03:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.325 03:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:46.592 03:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.592 03:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:46.592 03:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.592 03:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:46.853 03:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.853 03:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:46.853 03:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.853 03:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:47.111 03:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.111 03:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:47.111 03:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:47.112 03:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.371 03:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:47.371 03:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:47.371 03:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:47.630 03:07:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:47.889 03:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:48.826 03:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:48.826 03:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:48.826 03:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.826 03:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:49.085 03:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:49.085 03:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:49.085 03:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:49.085 03:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.344 03:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:49.344 03:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:49.344 03:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.344 03:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:49.602 03:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.602 03:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:49.602 03:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.602 03:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:49.861 03:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.861 03:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:49.861 03:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.861 03:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 00:22:50.120 03:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:50.120 03:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:50.120 03:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.120 03:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:50.380 03:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:50.380 03:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:50.380 03:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:50.639 03:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:50.898 03:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:51.835 03:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:51.835 03:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:51.835 03:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.835 03:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:52.094 03:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:52.094 03:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:52.094 03:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.094 03:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:52.353 03:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.353 03:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:52.353 03:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.353 03:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
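Every current/connected/accessible check in this trace follows the same pattern: dump the host's I/O paths over the bdevperf RPC socket and pull a single boolean out with jq. The sketch below is a reconstruction of that pattern from the commands shown above, not the actual multipath_status.sh source; the socket path, helper name, and ports are the ones used in this run.

    # Query bdevperf's view of the I/O paths and test one field for one listener port.
    port_status() {
        local port=$1 field=$2 expected=$3
        local value
        value=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
                    bdev_nvme_get_io_paths |
                jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$value" == "$expected" ]]
    }
    port_status 4421 accessible true   # succeeds while the 4421 path is reachable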
00:22:52.922 03:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.922 03:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:52.922 03:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.922 03:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:52.922 03:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.922 03:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:52.922 03:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:52.922 03:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.181 03:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:53.181 03:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:53.181 03:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.181 03:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:53.440 03:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:53.440 03:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:53.700 03:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:53.700 03:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:22:53.960 03:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:54.220 03:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:55.154 03:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:55.155 03:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:55.155 03:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
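Each scenario in the trace is driven from the target side: the test sets the ANA state of both listeners with nvmf_subsystem_listener_set_ana_state, sleeps for a second so the initiator can pick up the ANA change, then re-runs the path checks. The following is a sketch reconstructed from the RPC calls above (subsystem NQN, address, and ports are the ones used in this run), not the script's actual source.

    # Set the ANA state of the 4420 and 4421 listeners, then let the host react.
    set_ANA_state() {
        local state_4420=$1 state_4421=$2
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$state_4420"
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$state_4421"
    }
    set_ANA_state optimized optimized
    sleep 1   # mirrors the sleep between each state change and check_status above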
00:22:55.155 03:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:55.414 03:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.414 03:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:55.414 03:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.414 03:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:55.673 03:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.673 03:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:55.673 03:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:55.673 03:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.931 03:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.931 03:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:55.931 03:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.931 03:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:56.190 03:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.190 03:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:56.190 03:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:56.190 03:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:56.757 03:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.757 03:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:56.757 03:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:56.757 03:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:56.757 03:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.757 
03:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:56.757 03:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:57.016 03:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:57.274 03:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:58.231 03:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:58.231 03:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:58.231 03:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.231 03:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:58.495 03:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:58.495 03:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:58.496 03:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.496 03:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:58.764 03:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.764 03:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:58.764 03:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:58.764 03:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.038 03:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.039 03:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:59.039 03:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.039 03:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:59.297 03:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.297 03:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:59.297 03:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.297 03:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:59.557 03:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.557 03:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:59.557 03:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.557 03:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:59.815 03:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.815 03:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:59.815 03:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:00.074 03:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:23:00.332 03:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:01.268 03:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:01.268 03:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:01.268 03:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.268 03:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:01.527 03:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.527 03:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:01.527 03:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:01.527 03:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.786 03:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.786 03:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:23:01.786 03:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.786 03:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:02.047 03:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.047 03:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:02.047 03:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.047 03:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:02.305 03:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.305 03:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:02.305 03:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.305 03:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:02.564 03:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.564 03:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:02.564 03:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.564 03:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:02.822 03:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.822 03:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:02.822 03:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:03.080 03:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:03.338 03:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:04.273 03:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:04.273 03:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:04.273 03:07:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.273 03:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:04.530 03:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.530 03:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:04.530 03:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:04.530 03:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.787 03:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:04.787 03:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:04.787 03:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.787 03:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:05.044 03:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.045 03:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:05.045 03:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:05.045 03:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.303 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.303 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:05.303 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:05.303 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.870 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.870 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:05.870 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.870 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:23:05.870 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:05.870 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 82564 00:23:05.870 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 82564 ']' 00:23:05.870 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 82564 00:23:05.870 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:05.870 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.870 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82564 00:23:05.870 killing process with pid 82564 00:23:05.870 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:05.870 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:05.870 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82564' 00:23:05.871 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 82564 00:23:05.871 03:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 82564 00:23:05.871 { 00:23:05.871 "results": [ 00:23:05.871 { 00:23:05.871 "job": "Nvme0n1", 00:23:05.871 "core_mask": "0x4", 00:23:05.871 "workload": "verify", 00:23:05.871 "status": "terminated", 00:23:05.871 "verify_range": { 00:23:05.871 "start": 0, 00:23:05.871 "length": 16384 00:23:05.871 }, 00:23:05.871 "queue_depth": 128, 00:23:05.871 "io_size": 4096, 00:23:05.871 "runtime": 32.528893, 00:23:05.871 "iops": 7980.935594703454, 00:23:05.871 "mibps": 31.175529666810366, 00:23:05.871 "io_failed": 0, 00:23:05.871 "io_timeout": 0, 00:23:05.871 "avg_latency_us": 16005.31352120183, 00:23:05.871 "min_latency_us": 1184.1163636363635, 00:23:05.871 "max_latency_us": 4026531.84 00:23:05.871 } 00:23:05.871 ], 00:23:05.871 "core_count": 1 00:23:05.871 } 00:23:06.814 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 82564 00:23:06.814 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:06.814 [2024-12-05 03:07:02.131211] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:23:06.814 [2024-12-05 03:07:02.131362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82564 ] 00:23:06.814 [2024-12-05 03:07:02.292022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.814 [2024-12-05 03:07:02.385051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.814 [2024-12-05 03:07:02.536364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:06.814 Running I/O for 90 seconds... 
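The terminated-job summary printed above reports the same throughput as both IOPS and MiB/s, which is consistent for the 4096-byte I/O size used here. A quick check of that arithmetic with the values copied from the results block:

    # Recompute MiB/s from the reported IOPS and io_size (4096 bytes).
    awk 'BEGIN { iops = 7980.935594703454; io_size = 4096;
                 printf "%.6f MiB/s\n", iops * io_size / (1024 * 1024) }'
    # prints 31.175530 MiB/s, matching the "mibps" field above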
00:23:06.814 8393.00 IOPS, 32.79 MiB/s [2024-12-05T03:07:37.658Z] 8484.50 IOPS, 33.14 MiB/s [2024-12-05T03:07:37.658Z] 8515.00 IOPS, 33.26 MiB/s [2024-12-05T03:07:37.658Z] 8486.25 IOPS, 33.15 MiB/s [2024-12-05T03:07:37.658Z] 8453.00 IOPS, 33.02 MiB/s [2024-12-05T03:07:37.658Z] 8498.50 IOPS, 33.20 MiB/s [2024-12-05T03:07:37.658Z] 8515.14 IOPS, 33.26 MiB/s [2024-12-05T03:07:37.658Z] 8525.88 IOPS, 33.30 MiB/s [2024-12-05T03:07:37.658Z] 8540.22 IOPS, 33.36 MiB/s [2024-12-05T03:07:37.658Z] 8533.80 IOPS, 33.34 MiB/s [2024-12-05T03:07:37.658Z] 8522.73 IOPS, 33.29 MiB/s [2024-12-05T03:07:37.658Z] 8535.17 IOPS, 33.34 MiB/s [2024-12-05T03:07:37.658Z] 8531.54 IOPS, 33.33 MiB/s [2024-12-05T03:07:37.658Z] 8518.43 IOPS, 33.28 MiB/s [2024-12-05T03:07:37.658Z] [2024-12-05 03:07:18.289421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.814 [2024-12-05 03:07:18.289492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.289584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.814 [2024-12-05 03:07:18.289617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.289649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.814 [2024-12-05 03:07:18.289671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.289700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.814 [2024-12-05 03:07:18.289721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.289794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.814 [2024-12-05 03:07:18.289836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.289866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.814 [2024-12-05 03:07:18.289888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.289917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.814 [2024-12-05 03:07:18.289939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.289967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.814 [2024-12-05 03:07:18.289989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.290018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.290042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.290092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.290115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.290143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.290179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.290221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.290243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.290269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.290290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.290317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.290337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.290365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.290385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.290412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.290433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.290460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.290480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.290507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 
03:07:18.290528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.290555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.290575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.290618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.290639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.290667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.290688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.290731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.290754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.290783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.290804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.290846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.290871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.290942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.290969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.291016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.291039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.291069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.291092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.291124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44680 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.291147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.291177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.291200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:06.814 [2024-12-05 03:07:18.291230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.814 [2024-12-05 03:07:18.291282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.291343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.815 [2024-12-05 03:07:18.291365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.291393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.815 [2024-12-05 03:07:18.291415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.291458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.815 [2024-12-05 03:07:18.291496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.291523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.815 [2024-12-05 03:07:18.291554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.291584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.815 [2024-12-05 03:07:18.291605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.291633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.815 [2024-12-05 03:07:18.291654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.291683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.815 [2024-12-05 03:07:18.291704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.291732] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.815 [2024-12-05 03:07:18.291753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.291795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.815 [2024-12-05 03:07:18.291816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.291844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.815 [2024-12-05 03:07:18.291865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.291912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.291938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.291967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.291988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.292016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.292037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.292064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.292085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.292112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.292132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.292160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.292190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.292219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.292240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 
03:07:18.292267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.292288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.292317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.815 [2024-12-05 03:07:18.292338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.292365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.815 [2024-12-05 03:07:18.292385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.292413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.815 [2024-12-05 03:07:18.292433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.292461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.815 [2024-12-05 03:07:18.292481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.292508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.815 [2024-12-05 03:07:18.292528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.292556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.815 [2024-12-05 03:07:18.292577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.292622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.815 [2024-12-05 03:07:18.292642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.292670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.815 [2024-12-05 03:07:18.292691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.292719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.292741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.292779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.292804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.292843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.292866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.292894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.292930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.292957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.292978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.293006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.293026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.293054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.293074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.293101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.293122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.293152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.293173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.293200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.293220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.293248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.293268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:06.815 [2024-12-05 03:07:18.293295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.815 [2024-12-05 03:07:18.293316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.293342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.293363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.293391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.293412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.293447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.293469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.293496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.293517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.293544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.816 [2024-12-05 03:07:18.293564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.293608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.816 [2024-12-05 03:07:18.293629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.293657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.816 [2024-12-05 03:07:18.293679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.293706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.816 [2024-12-05 03:07:18.293727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.293755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.816 [2024-12-05 03:07:18.293792] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.293836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.816 [2024-12-05 03:07:18.293859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.293888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.816 [2024-12-05 03:07:18.293910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.293938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.816 [2024-12-05 03:07:18.293960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.294005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.816 [2024-12-05 03:07:18.294026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.294054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.816 [2024-12-05 03:07:18.294075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.294103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.816 [2024-12-05 03:07:18.294132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.294177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.816 [2024-12-05 03:07:18.294198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.294225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.816 [2024-12-05 03:07:18.294246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.294273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.816 [2024-12-05 03:07:18.294294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.294322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44960 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:06.816 [2024-12-05 03:07:18.294343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.294371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:44968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.816 [2024-12-05 03:07:18.294392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.294424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.294447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.294475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.294497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.294541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.294562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.294590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.294611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.294640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.294661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.294688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.294709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.294754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.294802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.294852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.294891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.294971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:55 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.294996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.295027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.295050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.295081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.295104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.295134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.295158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.295188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.295241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.295300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.295321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.295348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.295368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.295396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.816 [2024-12-05 03:07:18.295417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.295444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.816 [2024-12-05 03:07:18.295464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.295491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.816 [2024-12-05 03:07:18.295512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:06.816 [2024-12-05 03:07:18.295539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.816 [2024-12-05 03:07:18.295568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.295598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.817 [2024-12-05 03:07:18.295619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.295646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.817 [2024-12-05 03:07:18.295667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.295695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.817 [2024-12-05 03:07:18.295715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.295742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.817 [2024-12-05 03:07:18.295762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.295789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.817 [2024-12-05 03:07:18.295810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.295838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.817 [2024-12-05 03:07:18.295871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.295901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.817 [2024-12-05 03:07:18.295922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.295950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.817 [2024-12-05 03:07:18.295971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.295997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.817 [2024-12-05 03:07:18.296018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:23:06.817 [2024-12-05 03:07:18.296045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.817 [2024-12-05 03:07:18.296066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.296093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.817 [2024-12-05 03:07:18.296113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.296140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.817 [2024-12-05 03:07:18.296161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.296199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.817 [2024-12-05 03:07:18.296220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.296247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.817 [2024-12-05 03:07:18.296267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.296294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.817 [2024-12-05 03:07:18.296315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.296343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.817 [2024-12-05 03:07:18.296363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.296390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.817 [2024-12-05 03:07:18.296410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.296437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.817 [2024-12-05 03:07:18.296458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.296485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.817 [2024-12-05 03:07:18.296505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.296532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.817 [2024-12-05 03:07:18.296553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:18.296965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.817 [2024-12-05 03:07:18.296998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:06.817 8016.93 IOPS, 31.32 MiB/s [2024-12-05T03:07:37.661Z] 7515.88 IOPS, 29.36 MiB/s [2024-12-05T03:07:37.661Z] 7073.76 IOPS, 27.63 MiB/s [2024-12-05T03:07:37.661Z] 6680.78 IOPS, 26.10 MiB/s [2024-12-05T03:07:37.661Z] 6719.00 IOPS, 26.25 MiB/s [2024-12-05T03:07:37.661Z] 6803.05 IOPS, 26.57 MiB/s [2024-12-05T03:07:37.661Z] 6950.81 IOPS, 27.15 MiB/s [2024-12-05T03:07:37.661Z] 7162.73 IOPS, 27.98 MiB/s [2024-12-05T03:07:37.661Z] 7341.00 IOPS, 28.68 MiB/s [2024-12-05T03:07:37.661Z] 7461.83 IOPS, 29.15 MiB/s [2024-12-05T03:07:37.661Z] 7507.20 IOPS, 29.32 MiB/s [2024-12-05T03:07:37.661Z] 7535.23 IOPS, 29.43 MiB/s [2024-12-05T03:07:37.661Z] 7589.30 IOPS, 29.65 MiB/s [2024-12-05T03:07:37.661Z] 7730.64 IOPS, 30.20 MiB/s [2024-12-05T03:07:37.661Z] 7846.66 IOPS, 30.65 MiB/s [2024-12-05T03:07:37.661Z] [2024-12-05 03:07:34.059968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.817 [2024-12-05 03:07:34.060040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:34.060084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.817 [2024-12-05 03:07:34.060127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:34.060160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.817 [2024-12-05 03:07:34.060181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:34.060207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.817 [2024-12-05 03:07:34.060227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:34.060252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.817 [2024-12-05 03:07:34.060272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:34.060298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:06.817 [2024-12-05 03:07:34.060317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:34.060343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.817 [2024-12-05 03:07:34.060363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:34.060389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.817 [2024-12-05 03:07:34.060408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:34.060434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.817 [2024-12-05 03:07:34.060453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:34.060479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.817 [2024-12-05 03:07:34.060499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:34.060525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.817 [2024-12-05 03:07:34.060544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:34.060570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.817 [2024-12-05 03:07:34.060590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:34.060616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.817 [2024-12-05 03:07:34.060635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:34.060661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.817 [2024-12-05 03:07:34.060697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:06.817 [2024-12-05 03:07:34.060737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.818 [2024-12-05 03:07:34.060805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.060837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:41 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.818 [2024-12-05 03:07:34.060858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.060902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.818 [2024-12-05 03:07:34.060929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.060958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.818 [2024-12-05 03:07:34.060980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.818 [2024-12-05 03:07:34.061029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.818 [2024-12-05 03:07:34.061078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.818 [2024-12-05 03:07:34.061128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.818 [2024-12-05 03:07:34.061177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.818 [2024-12-05 03:07:34.061254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.818 [2024-12-05 03:07:34.061301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.818 [2024-12-05 03:07:34.061347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061374] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.818 [2024-12-05 03:07:34.061394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.818 [2024-12-05 03:07:34.061452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.818 [2024-12-05 03:07:34.061499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.818 [2024-12-05 03:07:34.061549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.818 [2024-12-05 03:07:34.061596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.818 [2024-12-05 03:07:34.061643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.818 [2024-12-05 03:07:34.061690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.818 [2024-12-05 03:07:34.061737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.818 [2024-12-05 03:07:34.061784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.818 [2024-12-05 03:07:34.061846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.818 [2024-12-05 03:07:34.061894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.818 [2024-12-05 03:07:34.061942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.061969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.818 [2024-12-05 03:07:34.061989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.062017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.818 [2024-12-05 03:07:34.062045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.062074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.818 [2024-12-05 03:07:34.062094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.062121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.818 [2024-12-05 03:07:34.062140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.062167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.818 [2024-12-05 03:07:34.062188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.062215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.818 [2024-12-05 03:07:34.062236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.062269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.818 [2024-12-05 03:07:34.062291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.062319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.818 [2024-12-05 03:07:34.062339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:06.818 [2024-12-05 03:07:34.062366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.818 [2024-12-05 03:07:34.062386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.062413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.819 [2024-12-05 03:07:34.062433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.062459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.819 [2024-12-05 03:07:34.062480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.062506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.819 [2024-12-05 03:07:34.062526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.062553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.819 [2024-12-05 03:07:34.062573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.062600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.819 [2024-12-05 03:07:34.062632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.062660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.819 [2024-12-05 03:07:34.062681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.062708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.819 [2024-12-05 03:07:34.062728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.062767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.819 [2024-12-05 03:07:34.062792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.062819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.819 [2024-12-05 03:07:34.062840] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.062867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.819 [2024-12-05 03:07:34.062887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.062956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.819 [2024-12-05 03:07:34.062979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.063008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.819 [2024-12-05 03:07:34.063029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.063057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.819 [2024-12-05 03:07:34.063079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.063110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.819 [2024-12-05 03:07:34.063132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.063160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.819 [2024-12-05 03:07:34.063186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.063215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.819 [2024-12-05 03:07:34.063252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.063279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.819 [2024-12-05 03:07:34.063324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.063361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.819 [2024-12-05 03:07:34.063382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.063409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:06.819 [2024-12-05 03:07:34.063429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.063456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.819 [2024-12-05 03:07:34.063476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.063508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.819 [2024-12-05 03:07:34.063530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.063558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.819 [2024-12-05 03:07:34.063579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.063605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.819 [2024-12-05 03:07:34.063626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.063652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.819 [2024-12-05 03:07:34.063673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.063699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.819 [2024-12-05 03:07:34.063719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.063745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.819 [2024-12-05 03:07:34.063765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.063792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.819 [2024-12-05 03:07:34.063823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.063854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.819 [2024-12-05 03:07:34.063874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.063900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:65168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.819 [2024-12-05 03:07:34.063920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.063959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.819 [2024-12-05 03:07:34.063981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.064007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.819 [2024-12-05 03:07:34.064029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.064056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.819 [2024-12-05 03:07:34.064091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.064120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.819 [2024-12-05 03:07:34.064140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.065833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.819 [2024-12-05 03:07:34.065871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.065909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.819 [2024-12-05 03:07:34.065933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.065961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.819 [2024-12-05 03:07:34.065981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.066008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.819 [2024-12-05 03:07:34.066028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.066055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.819 [2024-12-05 03:07:34.066076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:06.819 [2024-12-05 03:07:34.066102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.820 [2024-12-05 03:07:34.066122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.066148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.066169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.066196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.066215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.066242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.066276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.066306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.066327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.066372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.066398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.066426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.066447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.066478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.066499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.066525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.820 [2024-12-05 03:07:34.066546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.066573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.820 [2024-12-05 03:07:34.066593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
00:23:06.820 [2024-12-05 03:07:34.066620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.066640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.066666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.066686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.066712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.066733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.066773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.820 [2024-12-05 03:07:34.066797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.066825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.820 [2024-12-05 03:07:34.066847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.066873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.820 [2024-12-05 03:07:34.066930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.066980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.067002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.067030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.820 [2024-12-05 03:07:34.067052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.067080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.067101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.067129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.820 [2024-12-05 03:07:34.067151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.067179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.820 [2024-12-05 03:07:34.067200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.067261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.067300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.067329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.067349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.067379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.067400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.067426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.067446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.067473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.067493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.067519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.820 [2024-12-05 03:07:34.067539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.067566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.820 [2024-12-05 03:07:34.067586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.067623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.067644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.067671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.820 [2024-12-05 03:07:34.067691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.067717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.820 [2024-12-05 03:07:34.067737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.067763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.067783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.067810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.820 [2024-12-05 03:07:34.067846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.067875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.067912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.067939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.067960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.069609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.820 [2024-12-05 03:07:34.069646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.069682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.820 [2024-12-05 03:07:34.069706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.069734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.069756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.069801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.820 [2024-12-05 03:07:34.069823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:06.820 [2024-12-05 03:07:34.069851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:06.821 [2024-12-05 03:07:34.069873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.069933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.069958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.069986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.070007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.070035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.070056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.070109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.821 [2024-12-05 03:07:34.070146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.070172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.821 [2024-12-05 03:07:34.070192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.070218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.821 [2024-12-05 03:07:34.070238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.070265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.821 [2024-12-05 03:07:34.070285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.070311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.821 [2024-12-05 03:07:34.070331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.070357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.070378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.070423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 
lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.070449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.070476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.070497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.070524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.070544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.070570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.821 [2024-12-05 03:07:34.070603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.070632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.821 [2024-12-05 03:07:34.070653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.070680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.821 [2024-12-05 03:07:34.070701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.070728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.821 [2024-12-05 03:07:34.070748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.070808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.070845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.070875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.070940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.070972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.821 [2024-12-05 03:07:34.070995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.071024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.071046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.071075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.071097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.071125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.071148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.071208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.821 [2024-12-05 03:07:34.071229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.071276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.071316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.071344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.071375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.071405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.821 [2024-12-05 03:07:34.071426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.071453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.071475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.071502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.071523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.071550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.821 [2024-12-05 03:07:34.071571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:23:06.821 [2024-12-05 03:07:34.071614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.071634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.071662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.071682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.071709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.821 [2024-12-05 03:07:34.071729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.071755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.071793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.071821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.821 [2024-12-05 03:07:34.071843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.071886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.821 [2024-12-05 03:07:34.071911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.071941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.071962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.074014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.074053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.074138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.821 [2024-12-05 03:07:34.074177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.821 [2024-12-05 03:07:34.074205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.822 [2024-12-05 03:07:34.074229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.074257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.074277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.074303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.074323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.074349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.074369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.074395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.074415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.074441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.074461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.074487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.074506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.074532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.074552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.074578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.074597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.074623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.074643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.074669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.822 [2024-12-05 03:07:34.074689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.074725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.822 [2024-12-05 03:07:34.074746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.074805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.822 [2024-12-05 03:07:34.074844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.074874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.074896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.074954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.074977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.075006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.822 [2024-12-05 03:07:34.075028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.075076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.822 [2024-12-05 03:07:34.075103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.075134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.075157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.075215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.075236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.075263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.822 [2024-12-05 03:07:34.075299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.075327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:06.822 [2024-12-05 03:07:34.075347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.075373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.822 [2024-12-05 03:07:34.075394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.075420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.075441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.075476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.822 [2024-12-05 03:07:34.075506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.075534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.822 [2024-12-05 03:07:34.075555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.075581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.075601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.075627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.822 [2024-12-05 03:07:34.075647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.075673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.822 [2024-12-05 03:07:34.075693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.075720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.075740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.075782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.822 [2024-12-05 03:07:34.075821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.075862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.822 [2024-12-05 03:07:34.075887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.075917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.822 [2024-12-05 03:07:34.075939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.075966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.075988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.076016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.076038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.076066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.076087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.076131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.822 [2024-12-05 03:07:34.076193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.076221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.822 [2024-12-05 03:07:34.076242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.076268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.822 [2024-12-05 03:07:34.076288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.076314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.822 [2024-12-05 03:07:34.076334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:06.822 [2024-12-05 03:07:34.076359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.823 [2024-12-05 03:07:34.076379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.076404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.823 [2024-12-05 03:07:34.076424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.076450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.823 [2024-12-05 03:07:34.076470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.076496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.823 [2024-12-05 03:07:34.076516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.078613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.823 [2024-12-05 03:07:34.078661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.078698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.823 [2024-12-05 03:07:34.078721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.078749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.823 [2024-12-05 03:07:34.078789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.078840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.823 [2024-12-05 03:07:34.078863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.078891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.823 [2024-12-05 03:07:34.078941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.078987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.823 [2024-12-05 03:07:34.079011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.079040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.823 [2024-12-05 03:07:34.079061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:23:06.823 [2024-12-05 03:07:34.079090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.823 [2024-12-05 03:07:34.079112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.079141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.823 [2024-12-05 03:07:34.079180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.079223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.823 [2024-12-05 03:07:34.079243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.079284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.823 [2024-12-05 03:07:34.079304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.079330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.823 [2024-12-05 03:07:34.079350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.079376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.823 [2024-12-05 03:07:34.079396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.079422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.823 [2024-12-05 03:07:34.079442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.079468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.823 [2024-12-05 03:07:34.079489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.079514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.823 [2024-12-05 03:07:34.079534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.079561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.823 [2024-12-05 03:07:34.079581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:61 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.079616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.823 [2024-12-05 03:07:34.079637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.079663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.823 [2024-12-05 03:07:34.079683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.079708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.823 [2024-12-05 03:07:34.079729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.079754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.823 [2024-12-05 03:07:34.079808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.079836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.823 [2024-12-05 03:07:34.079873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.079904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.823 [2024-12-05 03:07:34.079926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.079954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.823 [2024-12-05 03:07:34.079976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.080986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.823 [2024-12-05 03:07:34.081025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.081095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.823 [2024-12-05 03:07:34.081157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.081201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.823 [2024-12-05 03:07:34.081222] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.081249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.823 [2024-12-05 03:07:34.081269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.081295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.823 [2024-12-05 03:07:34.081316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.081342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.823 [2024-12-05 03:07:34.081374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:06.823 [2024-12-05 03:07:34.081403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.823 [2024-12-05 03:07:34.081423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:06.824 [2024-12-05 03:07:34.081450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.824 [2024-12-05 03:07:34.081470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:06.824 [2024-12-05 03:07:34.081497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.824 [2024-12-05 03:07:34.081517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:06.824 [2024-12-05 03:07:34.081543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.824 [2024-12-05 03:07:34.081562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:06.824 [2024-12-05 03:07:34.081590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.824 [2024-12-05 03:07:34.081610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:06.824 [2024-12-05 03:07:34.081655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.824 [2024-12-05 03:07:34.081680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:06.824 [2024-12-05 03:07:34.081707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:06.824 [2024-12-05 03:07:34.081728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:06.824 [2024-12-05 03:07:34.081755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.824 [2024-12-05 03:07:34.081808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:06.824 7945.70 IOPS, 31.04 MiB/s [2024-12-05T03:07:37.668Z] 7966.81 IOPS, 31.12 MiB/s [2024-12-05T03:07:37.668Z] 7977.84 IOPS, 31.16 MiB/s [2024-12-05T03:07:37.668Z] Received shutdown signal, test time was about 32.529662 seconds 00:23:06.824 00:23:06.824 Latency(us) 00:23:06.824 [2024-12-05T03:07:37.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.824 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:06.824 Verification LBA range: start 0x0 length 0x4000 00:23:06.824 Nvme0n1 : 32.53 7980.94 31.18 0.00 0.00 16005.31 1184.12 4026531.84 00:23:06.824 [2024-12-05T03:07:37.668Z] =================================================================================================================== 00:23:06.824 [2024-12-05T03:07:37.668Z] Total : 7980.94 31.18 0.00 0.00 16005.31 1184.12 4026531.84 00:23:06.824 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:07.083 rmmod nvme_tcp 00:23:07.083 rmmod nvme_fabrics 00:23:07.083 rmmod nvme_keyring 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 82507 ']' 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 82507 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 82507 ']' 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # 
kill -0 82507 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82507 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:07.083 killing process with pid 82507 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82507' 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 82507 00:23:07.083 03:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 82507 00:23:08.462 03:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:08.462 03:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:08.462 03:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:08.462 03:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:23:08.462 03:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:08.462 03:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:23:08.462 03:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:23:08.462 03:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:08.462 03:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:08.462 03:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:08.462 03:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:08.462 03:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:08.462 03:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:08.462 03:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:08.462 03:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:08.462 03:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:08.462 03:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:08.462 03:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:08.462 03:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:08.462 03:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:08.462 03:07:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:08.462 03:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:08.462 03:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:08.462 03:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.462 03:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:08.462 03:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.462 03:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:23:08.462 00:23:08.462 real 0m40.475s 00:23:08.462 user 2m9.338s 00:23:08.462 sys 0m10.046s 00:23:08.462 03:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:08.462 03:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:08.462 ************************************ 00:23:08.462 END TEST nvmf_host_multipath_status 00:23:08.462 ************************************ 00:23:08.462 03:07:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:08.462 03:07:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:08.462 03:07:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:08.462 03:07:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.462 ************************************ 00:23:08.462 START TEST nvmf_discovery_remove_ifc 00:23:08.462 ************************************ 00:23:08.462 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:08.462 * Looking for test storage... 
00:23:08.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:08.462 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:08.462 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:23:08.462 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:08.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.722 --rc genhtml_branch_coverage=1 00:23:08.722 --rc genhtml_function_coverage=1 00:23:08.722 --rc genhtml_legend=1 00:23:08.722 --rc geninfo_all_blocks=1 00:23:08.722 --rc geninfo_unexecuted_blocks=1 00:23:08.722 00:23:08.722 ' 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:08.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.722 --rc genhtml_branch_coverage=1 00:23:08.722 --rc genhtml_function_coverage=1 00:23:08.722 --rc genhtml_legend=1 00:23:08.722 --rc geninfo_all_blocks=1 00:23:08.722 --rc geninfo_unexecuted_blocks=1 00:23:08.722 00:23:08.722 ' 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:08.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.722 --rc genhtml_branch_coverage=1 00:23:08.722 --rc genhtml_function_coverage=1 00:23:08.722 --rc genhtml_legend=1 00:23:08.722 --rc geninfo_all_blocks=1 00:23:08.722 --rc geninfo_unexecuted_blocks=1 00:23:08.722 00:23:08.722 ' 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:08.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.722 --rc genhtml_branch_coverage=1 00:23:08.722 --rc genhtml_function_coverage=1 00:23:08.722 --rc genhtml_legend=1 00:23:08.722 --rc geninfo_all_blocks=1 00:23:08.722 --rc geninfo_unexecuted_blocks=1 00:23:08.722 00:23:08.722 ' 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:08.722 03:07:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:08.722 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:08.723 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:08.723 03:07:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:08.723 Cannot find device "nvmf_init_br" 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:08.723 Cannot find device "nvmf_init_br2" 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:08.723 Cannot find device "nvmf_tgt_br" 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:08.723 Cannot find device "nvmf_tgt_br2" 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:08.723 Cannot find device "nvmf_init_br" 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:08.723 Cannot find device "nvmf_init_br2" 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:08.723 Cannot find device "nvmf_tgt_br" 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:08.723 Cannot find device "nvmf_tgt_br2" 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:08.723 Cannot find device "nvmf_br" 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:08.723 Cannot find device "nvmf_init_if" 00:23:08.723 03:07:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:08.723 Cannot find device "nvmf_init_if2" 00:23:08.723 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:23:08.724 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:08.724 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:08.724 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:23:08.724 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:08.724 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:08.724 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:23:08.724 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:08.724 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:08.724 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:08.983 03:07:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:08.983 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:08.983 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:23:08.983 00:23:08.983 --- 10.0.0.3 ping statistics --- 00:23:08.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.983 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:08.983 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:08.983 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:23:08.983 00:23:08.983 --- 10.0.0.4 ping statistics --- 00:23:08.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.983 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:08.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:08.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:23:08.983 00:23:08.983 --- 10.0.0.1 ping statistics --- 00:23:08.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.983 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:08.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:08.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:23:08.983 00:23:08.983 --- 10.0.0.2 ping statistics --- 00:23:08.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.983 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=83405 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 83405 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 83405 ']' 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.983 03:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:09.242 [2024-12-05 03:07:39.941264] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:23:09.242 [2024-12-05 03:07:39.941426] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.502 [2024-12-05 03:07:40.130340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.502 [2024-12-05 03:07:40.253657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.502 [2024-12-05 03:07:40.253728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.502 [2024-12-05 03:07:40.253770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.502 [2024-12-05 03:07:40.253805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.502 [2024-12-05 03:07:40.253824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.502 [2024-12-05 03:07:40.255264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.761 [2024-12-05 03:07:40.473085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:10.330 03:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.330 03:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:10.330 03:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:10.330 03:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:10.330 03:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:10.330 03:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.330 03:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:10.330 03:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.330 03:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:10.330 [2024-12-05 03:07:40.980172] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.330 [2024-12-05 03:07:40.988281] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:10.330 null0 00:23:10.330 [2024-12-05 03:07:41.020227] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:10.330 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
00:23:10.330 03:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.330 03:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=83433 00:23:10.330 03:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:10.330 03:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83433 /tmp/host.sock 00:23:10.330 03:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 83433 ']' 00:23:10.330 03:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:10.330 03:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:10.330 03:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:10.330 03:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.330 03:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:10.330 [2024-12-05 03:07:41.166397] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:23:10.330 [2024-12-05 03:07:41.166802] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83433 ] 00:23:10.589 [2024-12-05 03:07:41.355194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.848 [2024-12-05 03:07:41.479305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.416 03:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.416 03:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:11.416 03:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:11.416 03:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:11.416 03:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.416 03:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:11.416 03:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.416 03:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:11.416 03:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.416 03:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:11.676 [2024-12-05 03:07:42.282849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:11.676 03:07:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.676 03:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:11.676 03:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.676 03:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:12.634 [2024-12-05 03:07:43.394327] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:12.634 [2024-12-05 03:07:43.394529] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:12.634 [2024-12-05 03:07:43.394584] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:12.634 [2024-12-05 03:07:43.400397] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:23:12.634 [2024-12-05 03:07:43.463108] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:23:12.634 [2024-12-05 03:07:43.464561] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:23:12.634 [2024-12-05 03:07:43.466588] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:12.634 [2024-12-05 03:07:43.466835] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:12.634 [2024-12-05 03:07:43.466991] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:12.634 [2024-12-05 03:07:43.467098] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:23:12.634 [2024-12-05 03:07:43.467278] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:12.634 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.634 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:12.634 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:12.634 [2024-12-05 03:07:43.473676] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
00:23:12.634 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.634 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.634 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:12.634 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:12.634 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:12.634 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:12.893 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.893 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:12.893 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:23:12.893 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:23:12.893 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:12.893 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:12.893 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.893 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:12.893 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.893 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:12.893 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:12.893 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:12.893 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.893 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:12.893 03:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:13.847 03:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:13.847 03:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.847 03:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:13.847 03:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.847 03:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:13.847 03:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:13.848 03:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:13.848 03:07:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.848 03:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:13.848 03:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:15.232 03:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:15.232 03:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.232 03:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:15.232 03:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.232 03:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:15.232 03:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:15.232 03:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:15.232 03:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.232 03:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:15.232 03:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:16.168 03:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:16.168 03:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.168 03:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.168 03:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:16.168 03:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:16.168 03:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:16.168 03:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:16.168 03:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.168 03:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:16.168 03:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:17.106 03:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:17.106 03:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.106 03:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.106 03:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:17.106 03:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:17.106 03:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:17.106 03:07:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:17.106 03:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.106 03:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:17.106 03:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:18.041 03:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:18.041 03:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:18.042 03:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:18.042 03:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.042 03:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:18.042 03:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:18.042 03:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:18.042 03:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.301 [2024-12-05 03:07:48.894048] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:18.301 [2024-12-05 03:07:48.894119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.301 [2024-12-05 03:07:48.894154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.301 [2024-12-05 03:07:48.894186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.301 [2024-12-05 03:07:48.894198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.301 [2024-12-05 03:07:48.894210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.301 [2024-12-05 03:07:48.894222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.301 [2024-12-05 03:07:48.894233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.301 [2024-12-05 03:07:48.894244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.301 [2024-12-05 03:07:48.894255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.301 [2024-12-05 03:07:48.894267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.301 [2024-12-05 03:07:48.894277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:23:18.301 03:07:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:18.301 03:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:18.301 [2024-12-05 03:07:48.904038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:23:18.301 [2024-12-05 03:07:48.914064] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:18.302 [2024-12-05 03:07:48.914100] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:18.302 [2024-12-05 03:07:48.914127] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:18.302 [2024-12-05 03:07:48.914152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:18.302 [2024-12-05 03:07:48.914258] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:19.239 03:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:19.239 03:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.239 03:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:19.239 03:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.239 03:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:19.239 03:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:19.239 03:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:19.239 [2024-12-05 03:07:49.976859] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:23:19.239 [2024-12-05 03:07:49.977120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:23:19.239 [2024-12-05 03:07:49.977165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:23:19.239 [2024-12-05 03:07:49.977224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:23:19.239 [2024-12-05 03:07:49.978188] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:23:19.239 [2024-12-05 03:07:49.978281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:19.239 [2024-12-05 03:07:49.978323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:19.239 [2024-12-05 03:07:49.978347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:19.239 [2024-12-05 03:07:49.978369] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:19.239 [2024-12-05 03:07:49.978391] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:19.239 [2024-12-05 03:07:49.978405] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:19.239 [2024-12-05 03:07:49.978427] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:19.239 [2024-12-05 03:07:49.978448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:19.239 03:07:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.239 03:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:19.239 03:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:20.177 [2024-12-05 03:07:50.978530] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:20.177 [2024-12-05 03:07:50.978591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:20.177 [2024-12-05 03:07:50.978621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:20.177 [2024-12-05 03:07:50.978650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:20.177 [2024-12-05 03:07:50.978663] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:23:20.177 [2024-12-05 03:07:50.978675] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:20.177 [2024-12-05 03:07:50.978684] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:20.177 [2024-12-05 03:07:50.978692] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:20.177 [2024-12-05 03:07:50.978742] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:23:20.177 [2024-12-05 03:07:50.978802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.177 [2024-12-05 03:07:50.978824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.177 [2024-12-05 03:07:50.978847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.177 [2024-12-05 03:07:50.978859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.177 [2024-12-05 03:07:50.978871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.177 [2024-12-05 03:07:50.978897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.177 [2024-12-05 03:07:50.978969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.177 [2024-12-05 03:07:50.978997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.177 [2024-12-05 03:07:50.979011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.177 [2024-12-05 03:07:50.979039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.177 [2024-12-05 03:07:50.979052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:23:20.177 [2024-12-05 03:07:50.979491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:23:20.177 [2024-12-05 03:07:50.980517] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:20.177 [2024-12-05 03:07:50.980567] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:23:20.177 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:20.177 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.177 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:20.177 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.177 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:20.177 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:20.177 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:20.177 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.437 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:20.437 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:20.437 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:20.437 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:20.437 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:20.437 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.437 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:20.437 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:20.437 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.437 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:20.437 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:20.437 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.437 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:20.437 03:07:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:21.374 03:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:21.374 03:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.374 03:07:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:21.374 03:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.374 03:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:21.374 03:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:21.374 03:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:21.374 03:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.374 03:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:21.374 03:07:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:22.356 [2024-12-05 03:07:52.989447] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:22.356 [2024-12-05 03:07:52.989497] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:22.356 [2024-12-05 03:07:52.989528] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:22.356 [2024-12-05 03:07:52.995509] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:23:22.356 [2024-12-05 03:07:53.050035] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:23:22.356 [2024-12-05 03:07:53.051266] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x61500002c180:1 started. 00:23:22.356 [2024-12-05 03:07:53.053227] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:22.356 [2024-12-05 03:07:53.053305] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:22.356 [2024-12-05 03:07:53.053364] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:22.356 [2024-12-05 03:07:53.053390] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:23:22.356 [2024-12-05 03:07:53.053404] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:22.356 [2024-12-05 03:07:53.058175] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x61500002c180 was disconnected and freed. delete nvme_qpair. 
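Above, the test re-adds 10.0.0.3/24 to nvmf_tgt_if inside the nvmf_tgt_ns_spdk namespace and brings the link back up, after which the discovery poller re-attaches the subsystem as nvme1. A rough sketch of that recovery step, reusing the get_bdev_list helper from the earlier sketch; the until-loop is an illustrative stand-in for the script's wait_for_bdev nvme1n1:

    # sketch: restore the target-side interface, then wait for the rediscovered
    # namespace to reappear as a bdev on the host
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    until [[ "$(get_bdev_list)" == *nvme1n1* ]]; do
        sleep 1
    done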
00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 83433 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 83433 ']' 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 83433 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83433 00:23:22.620 killing process with pid 83433 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83433' 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 83433 00:23:22.620 03:07:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 83433 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:23.559 rmmod nvme_tcp 00:23:23.559 rmmod nvme_fabrics 00:23:23.559 rmmod nvme_keyring 00:23:23.559 03:07:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 83405 ']' 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 83405 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 83405 ']' 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 83405 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83405 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83405' 00:23:23.559 killing process with pid 83405 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 83405 00:23:23.559 03:07:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 83405 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:23:24.496 00:23:24.496 real 0m16.150s 00:23:24.496 user 0m27.285s 00:23:24.496 sys 0m2.538s 00:23:24.496 ************************************ 00:23:24.496 END TEST nvmf_discovery_remove_ifc 00:23:24.496 ************************************ 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:24.496 03:07:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.755 ************************************ 00:23:24.755 START TEST nvmf_identify_kernel_target 00:23:24.755 ************************************ 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:24.755 * Looking for test storage... 
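The cleanup traced above (nvmftestfini followed by nvmf_veth_fini) unloads the NVMe fabrics modules, strips the SPDK-tagged iptables rules, and dismantles the veth/bridge topology before the next test starts. A condensed sketch of that sequence, with module, interface, and namespace names taken verbatim from the trace:

    # sketch: module unload, iptables cleanup, then veth/bridge teardown
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    # the namespace itself is removed by _remove_spdk_ns, whose body is not traced here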
00:23:24.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:24.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.755 --rc genhtml_branch_coverage=1 00:23:24.755 --rc genhtml_function_coverage=1 00:23:24.755 --rc genhtml_legend=1 00:23:24.755 --rc geninfo_all_blocks=1 00:23:24.755 --rc geninfo_unexecuted_blocks=1 00:23:24.755 00:23:24.755 ' 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:24.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.755 --rc genhtml_branch_coverage=1 00:23:24.755 --rc genhtml_function_coverage=1 00:23:24.755 --rc genhtml_legend=1 00:23:24.755 --rc geninfo_all_blocks=1 00:23:24.755 --rc geninfo_unexecuted_blocks=1 00:23:24.755 00:23:24.755 ' 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:24.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.755 --rc genhtml_branch_coverage=1 00:23:24.755 --rc genhtml_function_coverage=1 00:23:24.755 --rc genhtml_legend=1 00:23:24.755 --rc geninfo_all_blocks=1 00:23:24.755 --rc geninfo_unexecuted_blocks=1 00:23:24.755 00:23:24.755 ' 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:24.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.755 --rc genhtml_branch_coverage=1 00:23:24.755 --rc genhtml_function_coverage=1 00:23:24.755 --rc genhtml_legend=1 00:23:24.755 --rc geninfo_all_blocks=1 00:23:24.755 --rc geninfo_unexecuted_blocks=1 00:23:24.755 00:23:24.755 ' 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
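The block above is scripts/common.sh deciding whether the installed lcov (1.15 here) is older than 2, so that the extra --rc branch/function coverage options are needed. A simplified sketch of that check; sort -V stands in for the field-by-field cmp_versions walk the trace actually performs:

    # sketch: "lt A B" succeeds when version A sorts strictly before version B
    lt() {
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi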
00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.755 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:24.756 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:24.756 03:07:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:24.756 03:07:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:24.756 Cannot find device "nvmf_init_br" 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:24.756 Cannot find device "nvmf_init_br2" 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:24.756 Cannot find device "nvmf_tgt_br" 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:23:24.756 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:25.014 Cannot find device "nvmf_tgt_br2" 00:23:25.014 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:23:25.014 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:25.014 Cannot find device "nvmf_init_br" 00:23:25.014 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:23:25.014 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:25.014 Cannot find device "nvmf_init_br2" 00:23:25.014 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:23:25.014 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:25.014 Cannot find device "nvmf_tgt_br" 00:23:25.014 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:23:25.014 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:25.014 Cannot find device "nvmf_tgt_br2" 00:23:25.014 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:23:25.014 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:25.014 Cannot find device "nvmf_br" 00:23:25.014 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:23:25.014 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:25.014 Cannot find device "nvmf_init_if" 00:23:25.014 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:23:25.014 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:25.014 Cannot find device "nvmf_init_if2" 00:23:25.014 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:23:25.014 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:25.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:25.014 03:07:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:23:25.014 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:25.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:25.015 03:07:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:25.015 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:25.273 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:25.273 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:23:25.273 00:23:25.273 --- 10.0.0.3 ping statistics --- 00:23:25.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.273 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:25.273 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:25.273 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.094 ms 00:23:25.273 00:23:25.273 --- 10.0.0.4 ping statistics --- 00:23:25.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.273 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:25.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:25.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:23:25.273 00:23:25.273 --- 10.0.0.1 ping statistics --- 00:23:25.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.273 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:25.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:25.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:23:25.273 00:23:25.273 --- 10.0.0.2 ping statistics --- 00:23:25.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.273 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:25.273 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:25.274 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:25.274 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:25.274 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:25.274 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:25.274 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:25.274 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:23:25.274 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:23:25.274 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:25.274 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:25.274 03:07:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:25.532 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:25.532 Waiting for block devices as requested 00:23:25.532 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:25.791 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:25.791 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:25.791 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:25.791 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:25.792 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:25.792 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:25.792 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:25.792 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:25.792 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:25.792 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:25.792 No valid GPT data, bailing 00:23:25.792 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:25.792 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:25.792 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:25.792 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:25.792 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:25.792 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:25.792 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:23:25.792 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:23:25.792 03:07:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:25.792 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:25.792 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:23:25.792 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:23:25.792 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:26.051 No valid GPT data, bailing 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:26.051 No valid GPT data, bailing 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:26.051 No valid GPT data, bailing 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -a 10.0.0.1 -t tcp -s 4420 00:23:26.051 00:23:26.051 Discovery Log Number of Records 2, Generation counter 2 00:23:26.051 =====Discovery Log Entry 0====== 00:23:26.051 trtype: tcp 00:23:26.051 adrfam: ipv4 00:23:26.051 subtype: current discovery subsystem 00:23:26.051 treq: not specified, sq flow control disable supported 00:23:26.051 portid: 1 00:23:26.051 trsvcid: 4420 00:23:26.051 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:26.051 traddr: 10.0.0.1 00:23:26.051 eflags: none 00:23:26.051 sectype: none 00:23:26.051 =====Discovery Log Entry 1====== 00:23:26.051 trtype: tcp 00:23:26.051 adrfam: ipv4 00:23:26.051 subtype: nvme subsystem 00:23:26.051 treq: not 
specified, sq flow control disable supported 00:23:26.051 portid: 1 00:23:26.051 trsvcid: 4420 00:23:26.051 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:26.051 traddr: 10.0.0.1 00:23:26.051 eflags: none 00:23:26.051 sectype: none 00:23:26.051 03:07:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:26.051 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:26.311 ===================================================== 00:23:26.311 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:26.311 ===================================================== 00:23:26.311 Controller Capabilities/Features 00:23:26.311 ================================ 00:23:26.311 Vendor ID: 0000 00:23:26.311 Subsystem Vendor ID: 0000 00:23:26.311 Serial Number: 0fb7f748656b9a2abf91 00:23:26.311 Model Number: Linux 00:23:26.311 Firmware Version: 6.8.9-20 00:23:26.311 Recommended Arb Burst: 0 00:23:26.311 IEEE OUI Identifier: 00 00 00 00:23:26.311 Multi-path I/O 00:23:26.311 May have multiple subsystem ports: No 00:23:26.311 May have multiple controllers: No 00:23:26.311 Associated with SR-IOV VF: No 00:23:26.311 Max Data Transfer Size: Unlimited 00:23:26.311 Max Number of Namespaces: 0 00:23:26.311 Max Number of I/O Queues: 1024 00:23:26.311 NVMe Specification Version (VS): 1.3 00:23:26.311 NVMe Specification Version (Identify): 1.3 00:23:26.311 Maximum Queue Entries: 1024 00:23:26.311 Contiguous Queues Required: No 00:23:26.311 Arbitration Mechanisms Supported 00:23:26.311 Weighted Round Robin: Not Supported 00:23:26.311 Vendor Specific: Not Supported 00:23:26.311 Reset Timeout: 7500 ms 00:23:26.311 Doorbell Stride: 4 bytes 00:23:26.311 NVM Subsystem Reset: Not Supported 00:23:26.311 Command Sets Supported 00:23:26.311 NVM Command Set: Supported 00:23:26.311 Boot Partition: Not Supported 00:23:26.311 Memory Page Size Minimum: 4096 bytes 00:23:26.311 Memory Page Size Maximum: 4096 bytes 00:23:26.311 Persistent Memory Region: Not Supported 00:23:26.311 Optional Asynchronous Events Supported 00:23:26.311 Namespace Attribute Notices: Not Supported 00:23:26.311 Firmware Activation Notices: Not Supported 00:23:26.311 ANA Change Notices: Not Supported 00:23:26.311 PLE Aggregate Log Change Notices: Not Supported 00:23:26.311 LBA Status Info Alert Notices: Not Supported 00:23:26.311 EGE Aggregate Log Change Notices: Not Supported 00:23:26.311 Normal NVM Subsystem Shutdown event: Not Supported 00:23:26.311 Zone Descriptor Change Notices: Not Supported 00:23:26.311 Discovery Log Change Notices: Supported 00:23:26.311 Controller Attributes 00:23:26.311 128-bit Host Identifier: Not Supported 00:23:26.311 Non-Operational Permissive Mode: Not Supported 00:23:26.311 NVM Sets: Not Supported 00:23:26.311 Read Recovery Levels: Not Supported 00:23:26.311 Endurance Groups: Not Supported 00:23:26.311 Predictable Latency Mode: Not Supported 00:23:26.311 Traffic Based Keep ALive: Not Supported 00:23:26.311 Namespace Granularity: Not Supported 00:23:26.311 SQ Associations: Not Supported 00:23:26.311 UUID List: Not Supported 00:23:26.311 Multi-Domain Subsystem: Not Supported 00:23:26.311 Fixed Capacity Management: Not Supported 00:23:26.311 Variable Capacity Management: Not Supported 00:23:26.311 Delete Endurance Group: Not Supported 00:23:26.311 Delete NVM Set: Not Supported 00:23:26.311 Extended LBA Formats Supported: Not Supported 00:23:26.311 Flexible Data 
Placement Supported: Not Supported 00:23:26.311 00:23:26.311 Controller Memory Buffer Support 00:23:26.311 ================================ 00:23:26.311 Supported: No 00:23:26.311 00:23:26.311 Persistent Memory Region Support 00:23:26.311 ================================ 00:23:26.311 Supported: No 00:23:26.311 00:23:26.311 Admin Command Set Attributes 00:23:26.311 ============================ 00:23:26.311 Security Send/Receive: Not Supported 00:23:26.311 Format NVM: Not Supported 00:23:26.311 Firmware Activate/Download: Not Supported 00:23:26.311 Namespace Management: Not Supported 00:23:26.311 Device Self-Test: Not Supported 00:23:26.311 Directives: Not Supported 00:23:26.311 NVMe-MI: Not Supported 00:23:26.311 Virtualization Management: Not Supported 00:23:26.311 Doorbell Buffer Config: Not Supported 00:23:26.311 Get LBA Status Capability: Not Supported 00:23:26.312 Command & Feature Lockdown Capability: Not Supported 00:23:26.312 Abort Command Limit: 1 00:23:26.312 Async Event Request Limit: 1 00:23:26.312 Number of Firmware Slots: N/A 00:23:26.312 Firmware Slot 1 Read-Only: N/A 00:23:26.572 Firmware Activation Without Reset: N/A 00:23:26.572 Multiple Update Detection Support: N/A 00:23:26.572 Firmware Update Granularity: No Information Provided 00:23:26.572 Per-Namespace SMART Log: No 00:23:26.572 Asymmetric Namespace Access Log Page: Not Supported 00:23:26.572 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:26.572 Command Effects Log Page: Not Supported 00:23:26.572 Get Log Page Extended Data: Supported 00:23:26.572 Telemetry Log Pages: Not Supported 00:23:26.572 Persistent Event Log Pages: Not Supported 00:23:26.572 Supported Log Pages Log Page: May Support 00:23:26.572 Commands Supported & Effects Log Page: Not Supported 00:23:26.572 Feature Identifiers & Effects Log Page:May Support 00:23:26.572 NVMe-MI Commands & Effects Log Page: May Support 00:23:26.572 Data Area 4 for Telemetry Log: Not Supported 00:23:26.572 Error Log Page Entries Supported: 1 00:23:26.572 Keep Alive: Not Supported 00:23:26.572 00:23:26.572 NVM Command Set Attributes 00:23:26.572 ========================== 00:23:26.572 Submission Queue Entry Size 00:23:26.572 Max: 1 00:23:26.572 Min: 1 00:23:26.572 Completion Queue Entry Size 00:23:26.572 Max: 1 00:23:26.572 Min: 1 00:23:26.572 Number of Namespaces: 0 00:23:26.572 Compare Command: Not Supported 00:23:26.572 Write Uncorrectable Command: Not Supported 00:23:26.572 Dataset Management Command: Not Supported 00:23:26.572 Write Zeroes Command: Not Supported 00:23:26.572 Set Features Save Field: Not Supported 00:23:26.572 Reservations: Not Supported 00:23:26.572 Timestamp: Not Supported 00:23:26.572 Copy: Not Supported 00:23:26.572 Volatile Write Cache: Not Present 00:23:26.572 Atomic Write Unit (Normal): 1 00:23:26.572 Atomic Write Unit (PFail): 1 00:23:26.572 Atomic Compare & Write Unit: 1 00:23:26.572 Fused Compare & Write: Not Supported 00:23:26.572 Scatter-Gather List 00:23:26.572 SGL Command Set: Supported 00:23:26.572 SGL Keyed: Not Supported 00:23:26.572 SGL Bit Bucket Descriptor: Not Supported 00:23:26.572 SGL Metadata Pointer: Not Supported 00:23:26.572 Oversized SGL: Not Supported 00:23:26.572 SGL Metadata Address: Not Supported 00:23:26.572 SGL Offset: Supported 00:23:26.572 Transport SGL Data Block: Not Supported 00:23:26.572 Replay Protected Memory Block: Not Supported 00:23:26.572 00:23:26.572 Firmware Slot Information 00:23:26.572 ========================= 00:23:26.572 Active slot: 0 00:23:26.572 00:23:26.572 00:23:26.572 Error Log 
00:23:26.572 ========= 00:23:26.572 00:23:26.572 Active Namespaces 00:23:26.572 ================= 00:23:26.572 Discovery Log Page 00:23:26.572 ================== 00:23:26.572 Generation Counter: 2 00:23:26.572 Number of Records: 2 00:23:26.572 Record Format: 0 00:23:26.572 00:23:26.572 Discovery Log Entry 0 00:23:26.572 ---------------------- 00:23:26.572 Transport Type: 3 (TCP) 00:23:26.572 Address Family: 1 (IPv4) 00:23:26.572 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:26.572 Entry Flags: 00:23:26.572 Duplicate Returned Information: 0 00:23:26.572 Explicit Persistent Connection Support for Discovery: 0 00:23:26.572 Transport Requirements: 00:23:26.572 Secure Channel: Not Specified 00:23:26.572 Port ID: 1 (0x0001) 00:23:26.572 Controller ID: 65535 (0xffff) 00:23:26.572 Admin Max SQ Size: 32 00:23:26.572 Transport Service Identifier: 4420 00:23:26.572 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:26.572 Transport Address: 10.0.0.1 00:23:26.572 Discovery Log Entry 1 00:23:26.572 ---------------------- 00:23:26.572 Transport Type: 3 (TCP) 00:23:26.572 Address Family: 1 (IPv4) 00:23:26.572 Subsystem Type: 2 (NVM Subsystem) 00:23:26.572 Entry Flags: 00:23:26.572 Duplicate Returned Information: 0 00:23:26.572 Explicit Persistent Connection Support for Discovery: 0 00:23:26.572 Transport Requirements: 00:23:26.572 Secure Channel: Not Specified 00:23:26.572 Port ID: 1 (0x0001) 00:23:26.572 Controller ID: 65535 (0xffff) 00:23:26.572 Admin Max SQ Size: 32 00:23:26.572 Transport Service Identifier: 4420 00:23:26.572 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:26.572 Transport Address: 10.0.0.1 00:23:26.572 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:26.834 get_feature(0x01) failed 00:23:26.834 get_feature(0x02) failed 00:23:26.834 get_feature(0x04) failed 00:23:26.834 ===================================================== 00:23:26.834 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:26.834 ===================================================== 00:23:26.834 Controller Capabilities/Features 00:23:26.834 ================================ 00:23:26.834 Vendor ID: 0000 00:23:26.834 Subsystem Vendor ID: 0000 00:23:26.834 Serial Number: f37c05b47c4975069771 00:23:26.834 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:26.834 Firmware Version: 6.8.9-20 00:23:26.834 Recommended Arb Burst: 6 00:23:26.834 IEEE OUI Identifier: 00 00 00 00:23:26.834 Multi-path I/O 00:23:26.834 May have multiple subsystem ports: Yes 00:23:26.834 May have multiple controllers: Yes 00:23:26.834 Associated with SR-IOV VF: No 00:23:26.834 Max Data Transfer Size: Unlimited 00:23:26.834 Max Number of Namespaces: 1024 00:23:26.834 Max Number of I/O Queues: 128 00:23:26.834 NVMe Specification Version (VS): 1.3 00:23:26.834 NVMe Specification Version (Identify): 1.3 00:23:26.834 Maximum Queue Entries: 1024 00:23:26.834 Contiguous Queues Required: No 00:23:26.834 Arbitration Mechanisms Supported 00:23:26.834 Weighted Round Robin: Not Supported 00:23:26.834 Vendor Specific: Not Supported 00:23:26.834 Reset Timeout: 7500 ms 00:23:26.834 Doorbell Stride: 4 bytes 00:23:26.834 NVM Subsystem Reset: Not Supported 00:23:26.834 Command Sets Supported 00:23:26.834 NVM Command Set: Supported 00:23:26.834 Boot Partition: Not Supported 00:23:26.834 Memory 
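The spdk_nvme_identify invocations in this test drive the tool with an inline transport ID (-r) rather than a local PCIe address: first against the well-known discovery NQN, then against the exported subsystem itself. Condensed from the trace, with the same binary path and transport strings this job uses:

    IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify

    # kernel-side discovery with nvme-cli, using the host NQN/ID generated for this run
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -a 10.0.0.1 -t tcp -s 4420

    # discovery subsystem: yields the two log entries printed above
    "$IDENTIFY" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'

    # the NVM subsystem itself: ANA state, namespace 1, error log, and so on
    "$IDENTIFY" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'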
Page Size Minimum: 4096 bytes 00:23:26.834 Memory Page Size Maximum: 4096 bytes 00:23:26.834 Persistent Memory Region: Not Supported 00:23:26.834 Optional Asynchronous Events Supported 00:23:26.834 Namespace Attribute Notices: Supported 00:23:26.834 Firmware Activation Notices: Not Supported 00:23:26.834 ANA Change Notices: Supported 00:23:26.834 PLE Aggregate Log Change Notices: Not Supported 00:23:26.834 LBA Status Info Alert Notices: Not Supported 00:23:26.834 EGE Aggregate Log Change Notices: Not Supported 00:23:26.834 Normal NVM Subsystem Shutdown event: Not Supported 00:23:26.834 Zone Descriptor Change Notices: Not Supported 00:23:26.834 Discovery Log Change Notices: Not Supported 00:23:26.834 Controller Attributes 00:23:26.834 128-bit Host Identifier: Supported 00:23:26.834 Non-Operational Permissive Mode: Not Supported 00:23:26.834 NVM Sets: Not Supported 00:23:26.834 Read Recovery Levels: Not Supported 00:23:26.834 Endurance Groups: Not Supported 00:23:26.834 Predictable Latency Mode: Not Supported 00:23:26.834 Traffic Based Keep ALive: Supported 00:23:26.834 Namespace Granularity: Not Supported 00:23:26.834 SQ Associations: Not Supported 00:23:26.834 UUID List: Not Supported 00:23:26.834 Multi-Domain Subsystem: Not Supported 00:23:26.834 Fixed Capacity Management: Not Supported 00:23:26.834 Variable Capacity Management: Not Supported 00:23:26.834 Delete Endurance Group: Not Supported 00:23:26.834 Delete NVM Set: Not Supported 00:23:26.834 Extended LBA Formats Supported: Not Supported 00:23:26.834 Flexible Data Placement Supported: Not Supported 00:23:26.834 00:23:26.834 Controller Memory Buffer Support 00:23:26.834 ================================ 00:23:26.834 Supported: No 00:23:26.834 00:23:26.834 Persistent Memory Region Support 00:23:26.834 ================================ 00:23:26.834 Supported: No 00:23:26.834 00:23:26.834 Admin Command Set Attributes 00:23:26.834 ============================ 00:23:26.834 Security Send/Receive: Not Supported 00:23:26.834 Format NVM: Not Supported 00:23:26.834 Firmware Activate/Download: Not Supported 00:23:26.834 Namespace Management: Not Supported 00:23:26.834 Device Self-Test: Not Supported 00:23:26.834 Directives: Not Supported 00:23:26.834 NVMe-MI: Not Supported 00:23:26.834 Virtualization Management: Not Supported 00:23:26.834 Doorbell Buffer Config: Not Supported 00:23:26.834 Get LBA Status Capability: Not Supported 00:23:26.834 Command & Feature Lockdown Capability: Not Supported 00:23:26.834 Abort Command Limit: 4 00:23:26.834 Async Event Request Limit: 4 00:23:26.834 Number of Firmware Slots: N/A 00:23:26.834 Firmware Slot 1 Read-Only: N/A 00:23:26.834 Firmware Activation Without Reset: N/A 00:23:26.834 Multiple Update Detection Support: N/A 00:23:26.834 Firmware Update Granularity: No Information Provided 00:23:26.834 Per-Namespace SMART Log: Yes 00:23:26.834 Asymmetric Namespace Access Log Page: Supported 00:23:26.834 ANA Transition Time : 10 sec 00:23:26.834 00:23:26.834 Asymmetric Namespace Access Capabilities 00:23:26.834 ANA Optimized State : Supported 00:23:26.834 ANA Non-Optimized State : Supported 00:23:26.834 ANA Inaccessible State : Supported 00:23:26.834 ANA Persistent Loss State : Supported 00:23:26.834 ANA Change State : Supported 00:23:26.834 ANAGRPID is not changed : No 00:23:26.834 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:26.834 00:23:26.834 ANA Group Identifier Maximum : 128 00:23:26.834 Number of ANA Group Identifiers : 128 00:23:26.834 Max Number of Allowed Namespaces : 1024 00:23:26.834 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:23:26.834 Command Effects Log Page: Supported 00:23:26.834 Get Log Page Extended Data: Supported 00:23:26.834 Telemetry Log Pages: Not Supported 00:23:26.834 Persistent Event Log Pages: Not Supported 00:23:26.834 Supported Log Pages Log Page: May Support 00:23:26.834 Commands Supported & Effects Log Page: Not Supported 00:23:26.834 Feature Identifiers & Effects Log Page:May Support 00:23:26.834 NVMe-MI Commands & Effects Log Page: May Support 00:23:26.834 Data Area 4 for Telemetry Log: Not Supported 00:23:26.834 Error Log Page Entries Supported: 128 00:23:26.834 Keep Alive: Supported 00:23:26.834 Keep Alive Granularity: 1000 ms 00:23:26.834 00:23:26.834 NVM Command Set Attributes 00:23:26.834 ========================== 00:23:26.834 Submission Queue Entry Size 00:23:26.834 Max: 64 00:23:26.834 Min: 64 00:23:26.834 Completion Queue Entry Size 00:23:26.834 Max: 16 00:23:26.834 Min: 16 00:23:26.834 Number of Namespaces: 1024 00:23:26.834 Compare Command: Not Supported 00:23:26.834 Write Uncorrectable Command: Not Supported 00:23:26.834 Dataset Management Command: Supported 00:23:26.834 Write Zeroes Command: Supported 00:23:26.834 Set Features Save Field: Not Supported 00:23:26.834 Reservations: Not Supported 00:23:26.834 Timestamp: Not Supported 00:23:26.834 Copy: Not Supported 00:23:26.834 Volatile Write Cache: Present 00:23:26.834 Atomic Write Unit (Normal): 1 00:23:26.834 Atomic Write Unit (PFail): 1 00:23:26.834 Atomic Compare & Write Unit: 1 00:23:26.834 Fused Compare & Write: Not Supported 00:23:26.834 Scatter-Gather List 00:23:26.834 SGL Command Set: Supported 00:23:26.834 SGL Keyed: Not Supported 00:23:26.834 SGL Bit Bucket Descriptor: Not Supported 00:23:26.834 SGL Metadata Pointer: Not Supported 00:23:26.834 Oversized SGL: Not Supported 00:23:26.834 SGL Metadata Address: Not Supported 00:23:26.834 SGL Offset: Supported 00:23:26.834 Transport SGL Data Block: Not Supported 00:23:26.834 Replay Protected Memory Block: Not Supported 00:23:26.834 00:23:26.834 Firmware Slot Information 00:23:26.834 ========================= 00:23:26.834 Active slot: 0 00:23:26.834 00:23:26.834 Asymmetric Namespace Access 00:23:26.834 =========================== 00:23:26.834 Change Count : 0 00:23:26.834 Number of ANA Group Descriptors : 1 00:23:26.834 ANA Group Descriptor : 0 00:23:26.834 ANA Group ID : 1 00:23:26.834 Number of NSID Values : 1 00:23:26.834 Change Count : 0 00:23:26.834 ANA State : 1 00:23:26.834 Namespace Identifier : 1 00:23:26.834 00:23:26.835 Commands Supported and Effects 00:23:26.835 ============================== 00:23:26.835 Admin Commands 00:23:26.835 -------------- 00:23:26.835 Get Log Page (02h): Supported 00:23:26.835 Identify (06h): Supported 00:23:26.835 Abort (08h): Supported 00:23:26.835 Set Features (09h): Supported 00:23:26.835 Get Features (0Ah): Supported 00:23:26.835 Asynchronous Event Request (0Ch): Supported 00:23:26.835 Keep Alive (18h): Supported 00:23:26.835 I/O Commands 00:23:26.835 ------------ 00:23:26.835 Flush (00h): Supported 00:23:26.835 Write (01h): Supported LBA-Change 00:23:26.835 Read (02h): Supported 00:23:26.835 Write Zeroes (08h): Supported LBA-Change 00:23:26.835 Dataset Management (09h): Supported 00:23:26.835 00:23:26.835 Error Log 00:23:26.835 ========= 00:23:26.835 Entry: 0 00:23:26.835 Error Count: 0x3 00:23:26.835 Submission Queue Id: 0x0 00:23:26.835 Command Id: 0x5 00:23:26.835 Phase Bit: 0 00:23:26.835 Status Code: 0x2 00:23:26.835 Status Code Type: 0x0 00:23:26.835 Do Not Retry: 1 00:23:26.835 Error 
Location: 0x28 00:23:26.835 LBA: 0x0 00:23:26.835 Namespace: 0x0 00:23:26.835 Vendor Log Page: 0x0 00:23:26.835 ----------- 00:23:26.835 Entry: 1 00:23:26.835 Error Count: 0x2 00:23:26.835 Submission Queue Id: 0x0 00:23:26.835 Command Id: 0x5 00:23:26.835 Phase Bit: 0 00:23:26.835 Status Code: 0x2 00:23:26.835 Status Code Type: 0x0 00:23:26.835 Do Not Retry: 1 00:23:26.835 Error Location: 0x28 00:23:26.835 LBA: 0x0 00:23:26.835 Namespace: 0x0 00:23:26.835 Vendor Log Page: 0x0 00:23:26.835 ----------- 00:23:26.835 Entry: 2 00:23:26.835 Error Count: 0x1 00:23:26.835 Submission Queue Id: 0x0 00:23:26.835 Command Id: 0x4 00:23:26.835 Phase Bit: 0 00:23:26.835 Status Code: 0x2 00:23:26.835 Status Code Type: 0x0 00:23:26.835 Do Not Retry: 1 00:23:26.835 Error Location: 0x28 00:23:26.835 LBA: 0x0 00:23:26.835 Namespace: 0x0 00:23:26.835 Vendor Log Page: 0x0 00:23:26.835 00:23:26.835 Number of Queues 00:23:26.835 ================ 00:23:26.835 Number of I/O Submission Queues: 128 00:23:26.835 Number of I/O Completion Queues: 128 00:23:26.835 00:23:26.835 ZNS Specific Controller Data 00:23:26.835 ============================ 00:23:26.835 Zone Append Size Limit: 0 00:23:26.835 00:23:26.835 00:23:26.835 Active Namespaces 00:23:26.835 ================= 00:23:26.835 get_feature(0x05) failed 00:23:26.835 Namespace ID:1 00:23:26.835 Command Set Identifier: NVM (00h) 00:23:26.835 Deallocate: Supported 00:23:26.835 Deallocated/Unwritten Error: Not Supported 00:23:26.835 Deallocated Read Value: Unknown 00:23:26.835 Deallocate in Write Zeroes: Not Supported 00:23:26.835 Deallocated Guard Field: 0xFFFF 00:23:26.835 Flush: Supported 00:23:26.835 Reservation: Not Supported 00:23:26.835 Namespace Sharing Capabilities: Multiple Controllers 00:23:26.835 Size (in LBAs): 1310720 (5GiB) 00:23:26.835 Capacity (in LBAs): 1310720 (5GiB) 00:23:26.835 Utilization (in LBAs): 1310720 (5GiB) 00:23:26.835 UUID: 65e64747-91be-4929-8aec-c6f0c4e46bed 00:23:26.835 Thin Provisioning: Not Supported 00:23:26.835 Per-NS Atomic Units: Yes 00:23:26.835 Atomic Boundary Size (Normal): 0 00:23:26.835 Atomic Boundary Size (PFail): 0 00:23:26.835 Atomic Boundary Offset: 0 00:23:26.835 NGUID/EUI64 Never Reused: No 00:23:26.835 ANA group ID: 1 00:23:26.835 Namespace Write Protected: No 00:23:26.835 Number of LBA Formats: 1 00:23:26.835 Current LBA Format: LBA Format #00 00:23:26.835 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:23:26.835 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:26.835 rmmod nvme_tcp 00:23:26.835 rmmod nvme_fabrics 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:23:26.835 03:07:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:26.835 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:27.096 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:27.096 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:27.096 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:27.096 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:27.096 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:27.096 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.096 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.096 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.096 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
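nvmftestfini above unloads the host-side fabric modules, strips only the firewall rules this test added (they all carry an SPDK_NVMF comment, so a save/filter/restore round-trip removes them) and then dismantles the veth/bridge topology. A compact sketch of that teardown, using the interface and namespace names from the trace; the final netns delete is what remove_spdk_ns is assumed to boil down to, since its body is hidden by xtrace_disable:

    modprobe -v -r nvme-tcp nvme-fabrics                     # host side of the fabric

    # drop every iptables rule tagged with the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # unwind the test topology created by nvmf_veth_init
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster
      ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                         # assumed equivalent of remove_spdk_ns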
# return 0 00:23:27.096 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:27.096 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:27.096 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:23:27.096 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:27.096 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:27.096 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:27.096 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:27.096 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:27.096 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:27.096 03:07:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:28.035 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:28.035 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:28.035 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:28.035 00:23:28.035 real 0m3.404s 00:23:28.035 user 0m1.234s 00:23:28.035 sys 0m1.559s 00:23:28.035 03:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:28.035 03:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.035 ************************************ 00:23:28.035 END TEST nvmf_identify_kernel_target 00:23:28.035 ************************************ 00:23:28.035 03:07:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:28.035 03:07:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:28.035 03:07:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:28.035 03:07:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.035 ************************************ 00:23:28.035 START TEST nvmf_auth_host 00:23:28.035 ************************************ 00:23:28.035 03:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:28.296 * Looking for test storage... 
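clean_kernel_target above is the mirror image of the configfs setup: the namespace is disabled, the subsystem is unlinked from the port, the directories are removed and the nvmet modules are unloaded before setup.sh rebinds the PCI devices. Everything below except the "echo 0" redirect target (inferred to be the namespace enable attribute) is taken directly from the trace:

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"                   # inferred redirect target
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet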
00:23:28.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:28.296 03:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:28.296 03:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:28.296 03:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:28.296 03:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:28.296 03:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:28.296 03:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:28.296 03:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:28.296 03:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:28.296 03:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:28.296 03:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:28.296 03:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:28.296 03:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:28.296 03:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:28.296 03:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:28.296 03:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:28.296 03:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:23:28.296 03:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:23:28.296 03:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:28.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.296 --rc genhtml_branch_coverage=1 00:23:28.296 --rc genhtml_function_coverage=1 00:23:28.296 --rc genhtml_legend=1 00:23:28.296 --rc geninfo_all_blocks=1 00:23:28.296 --rc geninfo_unexecuted_blocks=1 00:23:28.296 00:23:28.296 ' 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:28.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.296 --rc genhtml_branch_coverage=1 00:23:28.296 --rc genhtml_function_coverage=1 00:23:28.296 --rc genhtml_legend=1 00:23:28.296 --rc geninfo_all_blocks=1 00:23:28.296 --rc geninfo_unexecuted_blocks=1 00:23:28.296 00:23:28.296 ' 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:28.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.296 --rc genhtml_branch_coverage=1 00:23:28.296 --rc genhtml_function_coverage=1 00:23:28.296 --rc genhtml_legend=1 00:23:28.296 --rc geninfo_all_blocks=1 00:23:28.296 --rc geninfo_unexecuted_blocks=1 00:23:28.296 00:23:28.296 ' 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:28.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.296 --rc genhtml_branch_coverage=1 00:23:28.296 --rc genhtml_function_coverage=1 00:23:28.296 --rc genhtml_legend=1 00:23:28.296 --rc geninfo_all_blocks=1 00:23:28.296 --rc geninfo_unexecuted_blocks=1 00:23:28.296 00:23:28.296 ' 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
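The scripts/common.sh block above is only deciding whether the installed lcov (1.15 on this runner) predates version 2, in which case the explicit branch/function coverage flags get exported. The comparison splits both version strings on dots and walks them field by field; a self-contained sketch of that idea (the helper name version_lt is illustrative, the harness calls it lt/cmp_versions):

    version_lt() {            # success if $1 sorts before $2 (dot-separated numeric fields)
      local -a a b
      IFS=. read -ra a <<< "$1"
      IFS=. read -ra b <<< "$2"
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
      done
      return 1                # equal
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi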
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
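nvmf/common.sh above mints a fresh host NQN for the run with nvme-cli and carries its UUID part along as the host ID; both values already appeared in the earlier nvme discover call. The derivation of NVME_HOSTID is not visible in the trace (only its result is), so the parameter expansion below is an assumption:

    NVME_HOSTNQN=$(nvme gen-hostnqn)           # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}            # assumed: keep only the trailing uuid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")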
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:28.296 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:28.297 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:28.297 Cannot find device "nvmf_init_br" 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:28.297 Cannot find device "nvmf_init_br2" 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:28.297 Cannot find device "nvmf_tgt_br" 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:28.297 Cannot find device "nvmf_tgt_br2" 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:28.297 Cannot find device "nvmf_init_br" 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:28.297 Cannot find device "nvmf_init_br2" 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:28.297 Cannot find device "nvmf_tgt_br" 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:23:28.297 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:28.557 Cannot find device "nvmf_tgt_br2" 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:28.557 Cannot find device "nvmf_br" 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:28.557 Cannot find device "nvmf_init_if" 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:28.557 Cannot find device "nvmf_init_if2" 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:28.557 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:28.557 03:07:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:28.557 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:28.557 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:28.558 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:28.558 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:28.558 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:28.558 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:28.558 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:28.558 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:28.558 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:28.558 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:28.818 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:28.818 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:23:28.818 00:23:28.818 --- 10.0.0.3 ping statistics --- 00:23:28.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.818 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:28.818 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:28.818 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:23:28.818 00:23:28.818 --- 10.0.0.4 ping statistics --- 00:23:28.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.818 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:28.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:28.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:23:28.818 00:23:28.818 --- 10.0.0.1 ping statistics --- 00:23:28.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.818 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:28.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
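nvmf_veth_init above builds the whole test network from nothing: a target namespace, four veth pairs whose bridge-side ends are enslaved to nvmf_br, the 10.0.0.1-.4/24 addresses, iptables ACCEPT rules for port 4420 tagged so teardown can find them, and ping checks in both directions. A condensed sketch of the same topology with the names from the trace (the real ipts helper embeds the full rule text in the comment; a bare SPDK_NVMF tag is used here for brevity):

    ns=nvmf_tgt_ns_spdk
    ip netns add "$ns"

    # veth pairs: the *_if ends face the initiator/target, the *_br ends join the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns "$ns"
    ip link set nvmf_tgt_if2 netns "$ns"

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec "$ns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec "$ns" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
    done
    ip netns exec "$ns" ip link set nvmf_tgt_if up
    ip netns exec "$ns" ip link set nvmf_tgt_if2 up
    ip netns exec "$ns" ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
    done

    # let NVMe/TCP (4420) in and allow forwarding across the bridge, tagged for later cleanup
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF

    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4               # root ns -> target ns
    ip netns exec "$ns" ping -c 1 10.0.0.1                 # target ns -> root ns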
00:23:28.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:23:28.818 00:23:28.818 --- 10.0.0.2 ping statistics --- 00:23:28.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.818 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=84455 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 84455 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 84455 ']' 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
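nvmfappstart above launches the SPDK target inside the test namespace with nvme_auth logging enabled and then blocks in waitforlisten until the RPC socket answers. The loop below is a simplified stand-in for waitforlisten (which actually polls over rpc.py), shown only to make the sequencing explicit:

    ns_cmd=(ip netns exec nvmf_tgt_ns_spdk)
    app=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt

    "${ns_cmd[@]}" "$app" -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!

    # crude stand-in for waitforlisten: wait for the RPC unix socket to appear
    until [[ -S /var/tmp/spdk.sock ]]; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.2
    done
    echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"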
00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.818 03:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6eefd4d787caa465da04d665fa5dbd62 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.TSM 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6eefd4d787caa465da04d665fa5dbd62 0 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6eefd4d787caa465da04d665fa5dbd62 0 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6eefd4d787caa465da04d665fa5dbd62 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.TSM 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.TSM 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.TSM 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:30.197 03:08:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d6766855bfa0ff64a50a24fdbd69abbc84a7a5afaada0982c71bf841e4cbbe0a 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.AnK 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d6766855bfa0ff64a50a24fdbd69abbc84a7a5afaada0982c71bf841e4cbbe0a 3 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d6766855bfa0ff64a50a24fdbd69abbc84a7a5afaada0982c71bf841e4cbbe0a 3 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d6766855bfa0ff64a50a24fdbd69abbc84a7a5afaada0982c71bf841e4cbbe0a 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.AnK 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.AnK 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.AnK 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e7feea4225522916fd7713516707fc77aaccfe6ee1e504e9 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.FIJ 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e7feea4225522916fd7713516707fc77aaccfe6ee1e504e9 0 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e7feea4225522916fd7713516707fc77aaccfe6ee1e504e9 0 
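[editor's note] The gen_dhchap_key calls traced here draw random hex from /dev/urandom, wrap it in a DHHC-1 secret, and store it mode 0600 in a temp file. A stand-alone sketch of that formatting step, assuming the usual SPDK/nvme-cli convention (secret bytes followed by their little-endian CRC32, base64-encoded; digest id 0 = null, 1 = sha256, 2 = sha384, 3 = sha512); the function name is illustrative:

gen_dhchap_key_sketch() {
    local digest_id=$1 hexlen=$2 hexkey file
    hexkey=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)   # hexlen hex characters of randomness
    file=$(mktemp -t spdk.key-XXX)
    # Assumed DHHC-1 layout: base64(secret bytes + little-endian CRC32 of the secret).
    python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode()))' \
        "$hexkey" "$digest_id" > "$file"
    chmod 0600 "$file"
    echo "$file"
}
gen_dhchap_key_sketch 0 32   # e.g. keys[0] above: 32 hex characters, digest id 0 (null)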
00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e7feea4225522916fd7713516707fc77aaccfe6ee1e504e9 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.FIJ 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.FIJ 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.FIJ 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4892984e5e0996e3966b0c4cfe30d79786474fc251d249e3 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.6zQ 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4892984e5e0996e3966b0c4cfe30d79786474fc251d249e3 2 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4892984e5e0996e3966b0c4cfe30d79786474fc251d249e3 2 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4892984e5e0996e3966b0c4cfe30d79786474fc251d249e3 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.6zQ 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.6zQ 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.6zQ 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:30.197 03:08:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7c77d3c3ac9b1b39d5b482d631cfc3d6 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.9Ic 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7c77d3c3ac9b1b39d5b482d631cfc3d6 1 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7c77d3c3ac9b1b39d5b482d631cfc3d6 1 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7c77d3c3ac9b1b39d5b482d631cfc3d6 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.9Ic 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.9Ic 00:23:30.197 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.9Ic 00:23:30.198 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:30.198 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:30.198 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:30.198 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:30.198 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:30.198 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:30.198 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:30.198 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=76cbc718839dafaa0442ee303ea79ef9 00:23:30.198 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:30.198 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.mcB 00:23:30.198 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 76cbc718839dafaa0442ee303ea79ef9 1 00:23:30.198 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 76cbc718839dafaa0442ee303ea79ef9 1 00:23:30.198 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:30.198 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:30.198 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=76cbc718839dafaa0442ee303ea79ef9 00:23:30.198 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:30.198 03:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:30.198 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.mcB 00:23:30.198 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.mcB 00:23:30.198 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.mcB 00:23:30.198 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:30.198 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:30.198 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:30.198 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:30.198 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:30.198 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:30.198 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:30.198 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=72d056323685819f31d1e0982389665fe2f43bd79a4f0cb4 00:23:30.198 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.RGH 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 72d056323685819f31d1e0982389665fe2f43bd79a4f0cb4 2 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 72d056323685819f31d1e0982389665fe2f43bd79a4f0cb4 2 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=72d056323685819f31d1e0982389665fe2f43bd79a4f0cb4 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.RGH 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.RGH 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.RGH 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:30.457 03:08:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=62d1cecef3918491e46b8a8ab7cef943 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.1Er 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 62d1cecef3918491e46b8a8ab7cef943 0 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 62d1cecef3918491e46b8a8ab7cef943 0 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=62d1cecef3918491e46b8a8ab7cef943 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.1Er 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.1Er 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.1Er 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dc7fc7ad1c6825f27fa8352598e00a656e624a2a24ce93e661869c4dc55fe9dc 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.MLo 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dc7fc7ad1c6825f27fa8352598e00a656e624a2a24ce93e661869c4dc55fe9dc 3 00:23:30.457 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dc7fc7ad1c6825f27fa8352598e00a656e624a2a24ce93e661869c4dc55fe9dc 3 00:23:30.458 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:30.458 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:30.458 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dc7fc7ad1c6825f27fa8352598e00a656e624a2a24ce93e661869c4dc55fe9dc 00:23:30.458 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:30.458 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:23:30.458 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.MLo 00:23:30.458 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.MLo 00:23:30.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.458 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.MLo 00:23:30.458 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:30.458 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 84455 00:23:30.458 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 84455 ']' 00:23:30.458 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.458 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:30.458 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.458 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:30.458 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.716 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.716 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:23:30.716 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:30.716 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.TSM 00:23:30.716 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.716 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.716 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.716 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.AnK ]] 00:23:30.716 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AnK 00:23:30.716 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.716 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.716 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.716 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:30.716 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.FIJ 00:23:30.716 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.717 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.717 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.6zQ ]] 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.6zQ 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.9Ic 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.mcB ]] 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mcB 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.RGH 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.1Er ]] 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.1Er 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.MLo 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:30.975 03:08:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:30.975 03:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:31.234 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:31.234 Waiting for block devices as requested 00:23:31.234 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:31.494 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:32.063 No valid GPT data, bailing 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:32.063 No valid GPT data, bailing 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:23:32.063 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:32.322 No valid GPT data, bailing 00:23:32.322 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:32.322 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:32.322 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:32.322 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:23:32.322 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:32.322 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:32.322 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:23:32.322 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:23:32.322 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:32.322 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:32.322 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:23:32.322 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:23:32.322 03:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:32.322 No valid GPT data, bailing 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -a 10.0.0.1 -t tcp -s 4420 00:23:32.323 00:23:32.323 Discovery Log Number of Records 2, Generation counter 2 00:23:32.323 =====Discovery Log Entry 0====== 00:23:32.323 trtype: tcp 00:23:32.323 adrfam: ipv4 00:23:32.323 subtype: current discovery subsystem 00:23:32.323 treq: not specified, sq flow control disable supported 00:23:32.323 portid: 1 00:23:32.323 trsvcid: 4420 00:23:32.323 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:32.323 traddr: 10.0.0.1 00:23:32.323 eflags: none 00:23:32.323 sectype: none 00:23:32.323 =====Discovery Log Entry 1====== 00:23:32.323 trtype: tcp 00:23:32.323 adrfam: ipv4 00:23:32.323 subtype: nvme subsystem 00:23:32.323 treq: not specified, sq flow control disable supported 00:23:32.323 portid: 1 00:23:32.323 trsvcid: 4420 00:23:32.323 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:32.323 traddr: 10.0.0.1 00:23:32.323 eflags: none 00:23:32.323 sectype: none 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:32.323 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: ]] 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.582 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.583 nvme0n1 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: ]] 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.583 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.842 nvme0n1 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.842 
03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: ]] 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.842 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.843 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.843 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:32.843 03:08:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:32.843 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:32.843 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.843 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.843 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:32.843 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.843 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:32.843 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:32.843 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:32.843 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.843 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.843 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.843 nvme0n1 00:23:32.843 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.843 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.843 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.843 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.843 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:33.103 03:08:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: ]] 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.103 nvme0n1 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:33.103 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: ]] 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.104 03:08:03 
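
The nvmet_auth_set_key calls above (host/auth.sh@42-51) are what arm the Linux kernel nvmet target for DH-HMAC-CHAP: the echoed 'hmac(sha256)', the DH group name, and the two DHHC-1 secrets get written into the target's per-host auth attributes. The configfs paths below are an assumption (the redirect targets are not visible in this xtrace); they are shown only to make the bare echo lines readable:

# Hedged sketch of where auth.sh@48-51 likely sends those echoes (paths assumed)
host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host_cfg/dhchap_hash"       # digest
echo ffdhe2048      > "$host_cfg/dhchap_dhgroup"    # DH group under test
echo "$key"         > "$host_cfg/dhchap_key"        # host secret (keyN)
echo "$ckey"        > "$host_cfg/dhchap_ctrl_key"   # controller secret (ckeyN), enables bidirectional auth
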
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.104 03:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.364 nvme0n1 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:33.364 
03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.364 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
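
On the initiator side, each connect_authenticate pass (host/auth.sh@55-61) first narrows the SPDK bdev_nvme module to the digest and DH group under test and then performs the authenticated fabric connect. Reconstructed as plain rpc.py calls (rpc_cmd is the autotest wrapper around scripts/rpc.py; key4, ckey4 and so on are names of secrets registered earlier in the test, outside this excerpt):

# Equivalent of the rpc_cmd lines above as direct scripts/rpc.py invocations (sketch)
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key4    # keyid 4 has an empty ckey, so this pass authenticates the host only
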
00:23:33.623 nvme0n1 00:23:33.623 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.623 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.623 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.623 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.623 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.623 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.623 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.623 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.623 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.623 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.623 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.623 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:33.623 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.623 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:33.623 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.623 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.624 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:33.624 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:33.624 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:33.624 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:33.624 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.624 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: ]] 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:33.883 03:08:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.883 nvme0n1 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.883 03:08:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:33.883 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: ]] 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.884 03:08:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.884 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.147 nvme0n1 00:23:34.147 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.147 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.147 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.147 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.147 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.147 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.147 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.147 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.147 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.147 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.147 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.147 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.147 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:34.147 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: ]] 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:34.148 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:34.149 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:34.149 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.149 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.149 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:34.149 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.149 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:34.149 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:34.149 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:34.149 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.149 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.149 03:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.410 nvme0n1 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: ]] 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.410 nvme0n1 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.410 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.667 nvme0n1 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:34.667 03:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: ]] 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.231 03:08:06 
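
After each attach the test only checks that exactly one controller named nvme0 exists and then tears it down before the next key is tried; the bare nvme0n1 lines interleaved in the trace are the attach RPC printing the bdev it created for namespace 1. The verify-and-detach step at host/auth.sh@64-65, condensed:

# Verify the authenticated controller actually attached, then clean up (sketch)
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]                     # auth.sh@64: anything else fails the test
rpc_cmd bdev_nvme_detach_controller nvme0  # auth.sh@65: detach before trying the next key
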
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.231 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.490 nvme0n1 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: ]] 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:35.490 03:08:06 
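
The repeated ip_candidates blocks in this trace are nvmf/common.sh's get_main_ns_ip helper picking the address the initiator should dial: for tcp it dereferences NVMF_INITIATOR_IP (10.0.0.1 in this run); for rdma it would use NVMF_FIRST_TARGET_IP. A rough reconstruction from the trace (the real function may differ in detail):

# Rough sketch of get_main_ns_ip (nvmf/common.sh@769-783) as implied by the xtrace above
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the env var to read
    [[ -z ${!ip} ]] && return 1            # indirect expansion; 10.0.0.1 here
    echo "${!ip}"
}
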
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.490 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.748 nvme0n1 00:23:35.748 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.748 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.748 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: ]] 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.749 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.007 nvme0n1 00:23:36.007 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.007 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.007 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.007 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.007 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.007 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.007 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.007 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.007 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.007 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.007 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: ]] 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.008 03:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.266 nvme0n1 00:23:36.266 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.266 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.266 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.266 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.266 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.266 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.266 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.266 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.266 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.266 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.266 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.266 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.266 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:36.266 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.266 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:36.266 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:36.266 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:36.266 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:36.267 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:36.267 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:36.267 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:36.267 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:36.267 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:36.267 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:36.267 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.267 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:36.525 03:08:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.525 nvme0n1 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.525 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.792 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.792 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:36.792 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.792 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:36.792 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.792 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:36.792 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:36.792 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:36.792 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:36.792 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:36.792 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:36.792 03:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:38.164 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:38.164 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: ]] 00:23:38.164 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:38.164 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:38.164 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.164 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:38.164 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:38.164 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:38.164 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.164 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:38.164 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.164 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.165 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.165 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.165 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:38.165 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:38.165 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:38.165 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.165 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.165 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:38.165 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.165 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:38.165 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:38.165 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:38.165 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.165 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.165 03:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.423 nvme0n1 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: ]] 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.423 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.991 nvme0n1 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.991 03:08:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: ]] 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:38.991 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.992 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.992 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.992 03:08:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.992 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:38.992 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:38.992 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:38.992 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.992 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.992 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:38.992 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.992 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:38.992 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:38.992 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:38.992 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:38.992 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.992 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.251 nvme0n1 00:23:39.251 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.251 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.251 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.251 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.251 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.251 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.251 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.251 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.251 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.251 03:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: ]] 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:39.251 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.251 
03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.510 nvme0n1 00:23:39.510 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.510 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.510 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.510 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.510 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.769 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.770 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.030 nvme0n1 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.030 03:08:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: ]] 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.030 03:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.598 nvme0n1 00:23:40.598 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.598 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.598 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.598 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.598 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.598 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.598 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.598 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.598 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.598 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: ]] 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.599 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.167 nvme0n1 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: ]] 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.167 03:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.167 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.167 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.167 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:41.167 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:41.427 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:41.427 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.427 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.427 
03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:41.427 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.427 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:41.427 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:41.427 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:41.427 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.427 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.427 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.996 nvme0n1 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: ]] 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.996 03:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.565 nvme0n1 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.565 03:08:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:42.565 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:42.566 03:08:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.566 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.136 nvme0n1 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: ]] 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.136 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:43.136 nvme0n1 00:23:43.137 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.137 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.137 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.137 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.137 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.137 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.137 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.137 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.137 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.137 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.397 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.397 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.397 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:43.397 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.397 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.397 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: ]] 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.398 03:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.398 nvme0n1 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:43.398 
03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: ]] 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.398 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.658 nvme0n1 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: ]] 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.658 
03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.658 nvme0n1 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.658 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.918 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.918 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.918 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.918 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:43.918 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.918 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.918 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:43.918 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.918 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.918 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:43.918 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.919 nvme0n1 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: ]] 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.919 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.179 nvme0n1 00:23:44.179 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.179 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.179 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.179 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.179 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.179 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.179 
03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.179 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.179 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.179 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.179 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: ]] 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:44.180 03:08:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.180 03:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.456 nvme0n1 00:23:44.456 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.456 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.456 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.456 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.456 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.456 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.456 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.456 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.456 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.456 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:44.457 03:08:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: ]] 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:44.457 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.458 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.458 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:44.458 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.458 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:44.458 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:44.458 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:44.458 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:44.458 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.458 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.458 nvme0n1 00:23:44.458 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.458 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.458 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.458 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.458 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.458 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: ]] 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:44.724 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.725 03:08:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.725 nvme0n1 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:44.725 
03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.725 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
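Each of the nvme0n1 blocks above is one pass of the same per-key cycle from host/auth.sh: the target side is primed with the DH-HMAC-CHAP material (nvmet_auth_set_key echoes the HMAC digest, the DH group, the host secret and, when one is defined, the controller secret), then connect_authenticate narrows the host to that digest/dhgroup pair and attaches with the matching named keys. Below is a minimal sketch of one such round, restricted to the RPC calls and flags that actually appear in the xtrace above; rpc_cmd is the autotest wrapper used throughout this log, the DHHC-1 secrets are shortened to placeholders, and the redirection targets of the echo calls are not visible in the xtrace, so they are left out here as well.

# Sketch of a single authentication round as exercised above (sha384 / ffdhe3072 / keyid=4).
digest=sha384
dhgroup=ffdhe3072
keyid=4
key='DHHC-1:03:...'   # placeholder; the full secret is printed verbatim in the log
ckey=''               # keyid 4 carries no controller secret in this run

# Target side (nvmet_auth_set_key): digest, DH group and secrets are echoed out;
# where they land is not shown by the xtrace and is intentionally omitted here.
echo "hmac($digest)"
echo "$dhgroup"
echo "$key"
[[ -n $ckey ]] && echo "$ckey"

# Host side (connect_authenticate): restrict negotiation, then attach with named keys.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" ${ckey:+--dhchap-ctrlr-key "ckey$keyid"}

# Verify the controller authenticated, then detach so the next digest/dhgroup/key
# combination starts from a clean state.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0
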
00:23:44.983 nvme0n1 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: ]] 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:44.983 03:08:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.983 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.242 nvme0n1 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.242 03:08:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: ]] 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:45.242 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:45.243 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:45.243 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.243 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:45.243 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.243 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.243 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.243 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.243 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:45.243 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:45.243 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:45.243 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.243 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.243 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:45.243 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.243 03:08:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:45.243 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:45.243 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:45.243 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.243 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.243 03:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.502 nvme0n1 00:23:45.502 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.502 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.502 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.502 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.502 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.502 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.502 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: ]] 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.503 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.763 nvme0n1 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: ]] 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.763 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.031 nvme0n1 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.031 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.317 nvme0n1 00:23:46.317 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.317 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: ]] 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.318 03:08:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.318 03:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.592 nvme0n1 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: ]] 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:46.592 03:08:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.592 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.851 nvme0n1 00:23:46.851 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: ]] 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.110 03:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.369 nvme0n1 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:47.369 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: ]] 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.370 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.938 nvme0n1 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:47.938 03:08:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:47.938 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.939 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.198 nvme0n1 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: ]] 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.198 03:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.767 nvme0n1 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: ]] 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.767 03:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.335 nvme0n1 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.335 03:08:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: ]] 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.335 03:08:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.335 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.903 nvme0n1 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: ]] 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:49.903 03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.903 
03:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.471 nvme0n1 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:50.471 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:50.730 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:50.730 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.730 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.298 nvme0n1 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:51.298 03:08:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: ]] 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.298 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:51.299 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.299 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.299 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.299 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.299 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:51.299 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:51.299 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:51.299 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.299 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.299 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:51.299 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.299 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:51.299 03:08:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:51.299 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:51.299 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:51.299 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.299 03:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.299 nvme0n1 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: ]] 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:51.299 03:08:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.299 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.558 nvme0n1 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: ]] 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:51.558 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:51.559 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:51.559 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.559 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.559 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.559 nvme0n1 00:23:51.559 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.559 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.559 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.559 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.559 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: ]] 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:51.818 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.819 nvme0n1 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:51.819 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.078 nvme0n1 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: ]] 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.078 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.079 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.079 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:52.079 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:52.079 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:52.079 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.079 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.079 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:52.079 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.079 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:52.079 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:52.079 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:52.079 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.079 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.079 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:52.338 nvme0n1 00:23:52.338 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.338 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.338 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.338 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.338 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.338 03:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: ]] 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.338 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.596 nvme0n1 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:52.596 
03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: ]] 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:52.596 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.597 nvme0n1 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.597 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: ]] 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.856 
03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.856 nvme0n1 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.856 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.857 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.857 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:52.857 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:52.857 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:52.857 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.857 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.857 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:52.857 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.857 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:52.857 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:52.857 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:52.857 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:52.857 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.857 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.116 nvme0n1 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: ]] 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.116 03:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.376 nvme0n1 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.376 
03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: ]] 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.376 03:08:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.376 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.635 nvme0n1 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:53.635 03:08:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: ]] 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.635 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.895 nvme0n1 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: ]] 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.895 03:08:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.895 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.154 nvme0n1 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:54.154 
03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.154 03:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
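The passes above each exercise one (digest, dhgroup, keyid) combination: the target-side secret is installed through the nvmet_auth_set_key helper in host/auth.sh, the host is restricted to the same digest and DH group via bdev_nvme_set_options, and a controller is attached with the matching --dhchap-key/--dhchap-ctrlr-key, verified, and detached. A minimal host-side sketch of one such pass, assuming a reachable target at 10.0.0.1:4420 whose DH-HMAC-CHAP secrets already match, an SPDK checkout under $SPDK_DIR (hypothetical path), and key names key2/ckey2 registered with the target application earlier in the run (not shown in this excerpt):

    #!/usr/bin/env bash
    # Sketch only: mirrors the shape of one connect_authenticate pass from the trace.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumed location
    rpc="$SPDK_DIR/scripts/rpc.py"

    # Limit the initiator to the digest and DH group under test
    # (sha512 / ffdhe3072 here, as in the passes above).
    "$rpc" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # Attach with the per-keyid host key and the controller (bidirectional) key.
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # The controller should enumerate as nvme0 once authentication succeeds.
    [[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

    # Tear down before the next (dhgroup, keyid) combination.
    "$rpc" bdev_nvme_detach_controller nvme0

The trace repeats this sequence for each DH group and key index in the test's lists, which is why the ffdhe6144 passes that follow look identical apart from the dhgroup and keyid values.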
00:23:54.414 nvme0n1 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: ]] 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:54.414 03:08:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.414 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.978 nvme0n1 00:23:54.978 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.978 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.978 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.978 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.978 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.978 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.978 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.978 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.979 03:08:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: ]] 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.979 03:08:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.979 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.237 nvme0n1 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: ]] 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.237 03:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:55.237 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:55.237 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:55.237 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.237 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.237 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:55.237 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.237 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:55.237 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:55.237 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:55.237 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:55.237 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.237 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.496 nvme0n1 00:23:55.496 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.496 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.496 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.496 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.496 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.496 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.754 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.754 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:55.754 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.754 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.754 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.754 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.754 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:55.754 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.754 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.754 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:55.754 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:55.754 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:55.754 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:55.754 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.754 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:55.754 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:55.754 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: ]] 00:23:55.754 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:55.754 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.755 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.014 nvme0n1 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.014 03:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.273 nvme0n1 00:23:56.273 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.273 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.273 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.273 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.273 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlZmQ0ZDc4N2NhYTQ2NWRhMDRkNjY1ZmE1ZGJkNjJFG2Rd: 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: ]] 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDY3NjY4NTViZmEwZmY2NGE1MGEyNGZkYmQ2OWFiYmM4NGE3YTVhZmFhZGEwOTgyYzcxYmY4NDFlNGNiYmUwYaxLxWA=: 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.532 03:08:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:56.532 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.533 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:56.533 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:56.533 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:56.533 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:56.533 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.533 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.101 nvme0n1 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: ]] 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:57.101 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:57.102 03:08:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.102 03:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.670 nvme0n1 00:23:57.670 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.670 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.670 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.670 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.670 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.670 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.670 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.670 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.670 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.670 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.670 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.670 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.670 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: ]] 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.671 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.239 nvme0n1 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJkMDU2MzIzNjg1ODE5ZjMxZDFlMDk4MjM4OTY2NWZlMmY0M2JkNzlhNGYwY2I0S5o/Yg==: 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: ]] 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJkMWNlY2VmMzkxODQ5MWU0NmI4YThhYjdjZWY5NDOrbRHZ: 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.239 03:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.807 nvme0n1 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZmM3YWQxYzY4MjVmMjdmYTgzNTI1OThlMDBhNjU2ZTYyNGEyYTI0Y2U5M2U2NjE4NjljNGRjNTVmZTlkYynsDAM=: 00:23:58.807 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:58.808 03:08:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.808 03:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.376 nvme0n1 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: ]] 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.376 request: 00:23:59.376 { 00:23:59.376 "name": "nvme0", 00:23:59.376 "trtype": "tcp", 00:23:59.376 "traddr": "10.0.0.1", 00:23:59.376 "adrfam": "ipv4", 00:23:59.376 "trsvcid": "4420", 00:23:59.376 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:59.376 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:59.376 "prchk_reftag": false, 00:23:59.376 "prchk_guard": false, 00:23:59.376 "hdgst": false, 00:23:59.376 "ddgst": false, 00:23:59.376 "allow_unrecognized_csi": false, 00:23:59.376 "method": "bdev_nvme_attach_controller", 00:23:59.376 "req_id": 1 00:23:59.376 } 00:23:59.376 Got JSON-RPC error response 00:23:59.376 response: 00:23:59.376 { 00:23:59.376 "code": -5, 00:23:59.376 "message": "Input/output error" 00:23:59.376 } 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.376 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.636 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:59.636 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:59.636 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.636 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.636 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.636 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.637 request: 00:23:59.637 { 00:23:59.637 "name": "nvme0", 00:23:59.637 "trtype": "tcp", 00:23:59.637 "traddr": "10.0.0.1", 00:23:59.637 "adrfam": "ipv4", 00:23:59.637 "trsvcid": "4420", 00:23:59.637 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:59.637 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:59.637 "prchk_reftag": false, 00:23:59.637 "prchk_guard": false, 00:23:59.637 "hdgst": false, 00:23:59.637 "ddgst": false, 00:23:59.637 "dhchap_key": "key2", 00:23:59.637 "allow_unrecognized_csi": false, 00:23:59.637 "method": "bdev_nvme_attach_controller", 00:23:59.637 "req_id": 1 00:23:59.637 } 00:23:59.637 Got JSON-RPC error response 00:23:59.637 response: 00:23:59.637 { 00:23:59.637 "code": -5, 00:23:59.637 "message": "Input/output error" 00:23:59.637 } 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:59.637 03:08:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.637 request: 00:23:59.637 { 00:23:59.637 "name": "nvme0", 00:23:59.637 "trtype": "tcp", 00:23:59.637 "traddr": "10.0.0.1", 00:23:59.637 "adrfam": "ipv4", 00:23:59.637 "trsvcid": "4420", 
00:23:59.637 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:59.637 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:59.637 "prchk_reftag": false, 00:23:59.637 "prchk_guard": false, 00:23:59.637 "hdgst": false, 00:23:59.637 "ddgst": false, 00:23:59.637 "dhchap_key": "key1", 00:23:59.637 "dhchap_ctrlr_key": "ckey2", 00:23:59.637 "allow_unrecognized_csi": false, 00:23:59.637 "method": "bdev_nvme_attach_controller", 00:23:59.637 "req_id": 1 00:23:59.637 } 00:23:59.637 Got JSON-RPC error response 00:23:59.637 response: 00:23:59.637 { 00:23:59.637 "code": -5, 00:23:59.637 "message": "Input/output error" 00:23:59.637 } 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.637 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.897 nvme0n1 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: ]] 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.897 request: 00:23:59.897 { 00:23:59.897 "name": "nvme0", 00:23:59.897 "dhchap_key": "key1", 00:23:59.897 "dhchap_ctrlr_key": "ckey2", 00:23:59.897 "method": "bdev_nvme_set_keys", 00:23:59.897 "req_id": 1 00:23:59.897 } 00:23:59.897 Got JSON-RPC error response 00:23:59.897 response: 00:23:59.897 
{ 00:23:59.897 "code": -13, 00:23:59.897 "message": "Permission denied" 00:23:59.897 } 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:23:59.897 03:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:00.834 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.834 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:00.834 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.834 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.093 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.093 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:24:01.093 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:01.093 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.093 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdmZWVhNDIyNTUyMjkxNmZkNzcxMzUxNjcwN2ZjNzdhYWNjZmU2ZWUxZTUwNGU5UqXZ0g==: 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: ]] 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5Mjk4NGU1ZTA5OTZlMzk2NmIwYzRjZmUzMGQ3OTc4NjQ3NGZjMjUxZDI0OWUz3/JN2A==: 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.094 nvme0n1 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M3N2QzYzNhYzliMWIzOWQ1YjQ4MmQ2MzFjZmMzZDaOzb0R: 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: ]] 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjYmM3MTg4MzlkYWZhYTA0NDJlZTMwM2VhNzllZjnVPNWW: 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.094 request: 00:24:01.094 { 00:24:01.094 "name": "nvme0", 00:24:01.094 "dhchap_key": "key2", 00:24:01.094 "dhchap_ctrlr_key": "ckey1", 00:24:01.094 "method": "bdev_nvme_set_keys", 00:24:01.094 "req_id": 1 00:24:01.094 } 00:24:01.094 Got JSON-RPC error response 00:24:01.094 response: 00:24:01.094 { 00:24:01.094 "code": -13, 00:24:01.094 "message": "Permission denied" 00:24:01.094 } 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:01.094 03:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:24:02.472 03:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.472 03:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:02.472 03:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.472 03:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.472 03:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.472 03:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:24:02.472 03:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:24:02.472 03:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:24:02.472 03:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:02.472 03:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:24:02.472 03:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:24:02.472 03:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:02.472 03:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:24:02.472 03:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:02.472 03:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:02.472 rmmod nvme_tcp 00:24:02.472 rmmod nvme_fabrics 00:24:02.472 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:02.473 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:24:02.473 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:24:02.473 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 84455 ']' 00:24:02.473 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 84455 00:24:02.473 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 84455 ']' 00:24:02.473 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 84455 00:24:02.473 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:24:02.473 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.473 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84455 00:24:02.473 killing process with pid 84455 00:24:02.473 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:02.473 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:02.473 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84455' 00:24:02.473 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 84455 00:24:02.473 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 84455 00:24:03.043 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:03.043 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:03.043 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:03.043 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:24:03.043 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:24:03.043 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:03.043 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:03.043 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:03.043 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:03.043 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:03.302 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:03.302 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:03.302 03:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:03.302 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:03.302 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:03.302 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:03.302 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:03.302 03:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:03.302 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:03.302 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:03.302 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:03.302 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:03.302 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:03.302 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.302 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.302 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.302 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:24:03.302 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:03.302 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:03.302 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:03.302 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:03.302 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:24:03.561 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:03.561 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:03.561 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:03.561 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:03.561 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:03.561 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:03.561 03:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:04.130 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:04.130 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:24:04.130 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:04.390 03:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.TSM /tmp/spdk.key-null.FIJ /tmp/spdk.key-sha256.9Ic /tmp/spdk.key-sha384.RGH /tmp/spdk.key-sha512.MLo /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:24:04.390 03:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:04.648 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:04.648 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:04.648 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:04.648 ************************************ 00:24:04.648 END TEST nvmf_auth_host 00:24:04.648 ************************************ 00:24:04.648 00:24:04.648 real 0m36.599s 00:24:04.648 user 0m33.639s 00:24:04.648 sys 0m4.077s 00:24:04.648 03:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.648 03:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.648 03:08:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:24:04.648 03:08:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:04.648 03:08:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:04.648 03:08:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:04.649 03:08:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.649 ************************************ 00:24:04.649 START TEST nvmf_digest 00:24:04.649 ************************************ 00:24:04.649 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:04.908 * Looking for test storage... 
00:24:04.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:04.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.908 --rc genhtml_branch_coverage=1 00:24:04.908 --rc genhtml_function_coverage=1 00:24:04.908 --rc genhtml_legend=1 00:24:04.908 --rc geninfo_all_blocks=1 00:24:04.908 --rc geninfo_unexecuted_blocks=1 00:24:04.908 00:24:04.908 ' 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:04.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.908 --rc genhtml_branch_coverage=1 00:24:04.908 --rc genhtml_function_coverage=1 00:24:04.908 --rc genhtml_legend=1 00:24:04.908 --rc geninfo_all_blocks=1 00:24:04.908 --rc geninfo_unexecuted_blocks=1 00:24:04.908 00:24:04.908 ' 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:04.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.908 --rc genhtml_branch_coverage=1 00:24:04.908 --rc genhtml_function_coverage=1 00:24:04.908 --rc genhtml_legend=1 00:24:04.908 --rc geninfo_all_blocks=1 00:24:04.908 --rc geninfo_unexecuted_blocks=1 00:24:04.908 00:24:04.908 ' 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:04.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.908 --rc genhtml_branch_coverage=1 00:24:04.908 --rc genhtml_function_coverage=1 00:24:04.908 --rc genhtml_legend=1 00:24:04.908 --rc geninfo_all_blocks=1 00:24:04.908 --rc geninfo_unexecuted_blocks=1 00:24:04.908 00:24:04.908 ' 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.908 03:08:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.908 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:04.909 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:04.909 Cannot find device "nvmf_init_br" 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:04.909 Cannot find device "nvmf_init_br2" 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:04.909 Cannot find device "nvmf_tgt_br" 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:24:04.909 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:24:05.168 Cannot find device "nvmf_tgt_br2" 00:24:05.168 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:24:05.168 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:05.169 Cannot find device "nvmf_init_br" 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:05.169 Cannot find device "nvmf_init_br2" 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:05.169 Cannot find device "nvmf_tgt_br" 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:05.169 Cannot find device "nvmf_tgt_br2" 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:05.169 Cannot find device "nvmf_br" 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:05.169 Cannot find device "nvmf_init_if" 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:05.169 Cannot find device "nvmf_init_if2" 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:05.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:05.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:05.169 03:08:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:05.169 03:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:05.169 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:05.429 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:05.429 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:05.429 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:05.429 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:05.429 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:05.429 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:05.429 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:05.429 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:05.429 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:05.429 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:05.429 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:05.429 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:05.429 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:05.429 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:24:05.429 00:24:05.429 --- 10.0.0.3 ping statistics --- 00:24:05.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.429 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:24:05.429 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:05.429 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:05.429 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:24:05.429 00:24:05.429 --- 10.0.0.4 ping statistics --- 00:24:05.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.429 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:24:05.429 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:05.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:05.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:24:05.429 00:24:05.429 --- 10.0.0.1 ping statistics --- 00:24:05.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.430 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:05.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:24:05.430 00:24:05.430 --- 10.0.0.2 ping statistics --- 00:24:05.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.430 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:05.430 ************************************ 00:24:05.430 START TEST nvmf_digest_clean 00:24:05.430 ************************************ 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
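For reference, the nvmf_veth_init sequence traced and ping-verified above boils down to the following hand-runnable sketch. It is a condensed reading of the log, not part of the harness; the nvmf_tgt_ns_spdk namespace, the interface names, and the 10.0.0.x/24 addresses are simply the test's defaults.

# create the target namespace and a veth pair per side (the harness creates two per side)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# initiator address in the root namespace, target address inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# bridge the two sides together and open TCP/4420 for NVMe over TCP
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3   # initiator-to-target reachability check, as the log does above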
00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:05.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=86092 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 86092 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 86092 ']' 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.430 03:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:05.430 [2024-12-05 03:08:36.252228] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:24:05.430 [2024-12-05 03:08:36.252395] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.689 [2024-12-05 03:08:36.433903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.689 [2024-12-05 03:08:36.513999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.689 [2024-12-05 03:08:36.514298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.689 [2024-12-05 03:08:36.514476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.689 [2024-12-05 03:08:36.514613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.689 [2024-12-05 03:08:36.514665] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
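The target application started here is plain SPDK nvmf_tgt run inside the test namespace with configuration deferred until RPC. A condensed sketch of what nvmfappstart --wait-for-rpc amounts to follows; the binary path is this CI workspace's spdk_repo checkout, and the backgrounding/$! capture is an assumption about how the harness records the PID rather than something shown verbatim in the log.

# launch the NVMe-oF target in the namespace and defer subsystem init until RPC
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!   # assumed PID capture; the log reports nvmfpid=86092
# the harness then polls /var/tmp/spdk.sock (waitforlisten) before issuing rpc_cmd calls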
00:24:05.689 [2024-12-05 03:08:36.515803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.626 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.626 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:06.626 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:06.627 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:06.627 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:06.627 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.627 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:06.627 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:06.627 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:06.627 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.627 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:06.627 [2024-12-05 03:08:37.429343] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:06.887 null0 00:24:06.887 [2024-12-05 03:08:37.532687] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.887 [2024-12-05 03:08:37.556873] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:06.887 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.887 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:06.887 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:06.887 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:06.887 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:06.887 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:06.887 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:06.887 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:06.887 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86124 00:24:06.887 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:06.887 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86124 /var/tmp/bperf.sock 00:24:06.887 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 86124 ']' 00:24:06.887 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:24:06.887 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.887 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:06.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:06.887 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.887 03:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:06.887 [2024-12-05 03:08:37.674860] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:24:06.887 [2024-12-05 03:08:37.675294] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86124 ] 00:24:07.147 [2024-12-05 03:08:37.854322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.147 [2024-12-05 03:08:37.946234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.084 03:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.084 03:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:08.084 03:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:08.084 03:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:08.084 03:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:08.344 [2024-12-05 03:08:38.992550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:08.344 03:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:08.344 03:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:08.912 nvme0n1 00:24:08.912 03:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:08.912 03:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:08.912 Running I/O for 2 seconds... 
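This first run_bperf pass drives 4096-byte random reads at queue depth 128 through a digest-enabled NVMe/TCP controller. Condensed from the commands traced above (the bperf.sock path, cnode1 subsystem, and nvme0 name are the test's own; --ddgst turns on NVMe/TCP data digest on the initiator side):

# start bdevperf with its own RPC socket, bring it up, then attach with data digest enabled
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
  -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# kick off the 2-second workload whose results are tabulated below
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests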
00:24:10.849 14351.00 IOPS, 56.06 MiB/s [2024-12-05T03:08:41.693Z] 14414.50 IOPS, 56.31 MiB/s 00:24:10.849 Latency(us) 00:24:10.849 [2024-12-05T03:08:41.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.849 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:10.849 nvme0n1 : 2.01 14461.59 56.49 0.00 0.00 8844.38 8221.79 22639.71 00:24:10.849 [2024-12-05T03:08:41.693Z] =================================================================================================================== 00:24:10.849 [2024-12-05T03:08:41.693Z] Total : 14461.59 56.49 0.00 0.00 8844.38 8221.79 22639.71 00:24:10.849 { 00:24:10.849 "results": [ 00:24:10.849 { 00:24:10.849 "job": "nvme0n1", 00:24:10.849 "core_mask": "0x2", 00:24:10.849 "workload": "randread", 00:24:10.849 "status": "finished", 00:24:10.849 "queue_depth": 128, 00:24:10.849 "io_size": 4096, 00:24:10.849 "runtime": 2.01112, 00:24:10.849 "iops": 14461.593539918056, 00:24:10.849 "mibps": 56.49059976530491, 00:24:10.849 "io_failed": 0, 00:24:10.849 "io_timeout": 0, 00:24:10.849 "avg_latency_us": 8844.379452119878, 00:24:10.849 "min_latency_us": 8221.789090909091, 00:24:10.849 "max_latency_us": 22639.70909090909 00:24:10.849 } 00:24:10.849 ], 00:24:10.849 "core_count": 1 00:24:10.849 } 00:24:10.849 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:10.849 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:10.849 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:10.849 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:10.849 | select(.opcode=="crc32c") 00:24:10.849 | "\(.module_name) \(.executed)"' 00:24:10.849 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:11.108 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:11.108 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:11.108 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:11.108 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:11.108 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86124 00:24:11.108 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 86124 ']' 00:24:11.108 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 86124 00:24:11.108 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:11.108 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:11.108 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86124 00:24:11.367 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:11.367 killing process with pid 86124 00:24:11.367 Received shutdown signal, test time was about 2.000000 seconds 00:24:11.367 
00:24:11.367 Latency(us) 00:24:11.367 [2024-12-05T03:08:42.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.367 [2024-12-05T03:08:42.211Z] =================================================================================================================== 00:24:11.367 [2024-12-05T03:08:42.211Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:11.367 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:11.367 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86124' 00:24:11.367 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 86124 00:24:11.367 03:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 86124 00:24:11.935 03:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:11.935 03:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:11.935 03:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:11.935 03:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:11.935 03:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:11.935 03:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:11.935 03:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:11.935 03:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86191 00:24:11.935 03:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:11.935 03:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86191 /var/tmp/bperf.sock 00:24:11.935 03:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 86191 ']' 00:24:11.935 03:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:11.935 03:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:11.935 03:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:11.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:11.935 03:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:11.935 03:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:12.194 [2024-12-05 03:08:42.879638] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:24:12.194 [2024-12-05 03:08:42.880094] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86191 ] 00:24:12.194 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:12.194 Zero copy mechanism will not be used. 00:24:12.453 [2024-12-05 03:08:43.059391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.453 [2024-12-05 03:08:43.144859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.021 03:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.021 03:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:13.021 03:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:13.021 03:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:13.021 03:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:13.587 [2024-12-05 03:08:44.155758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:13.587 03:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:13.587 03:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:13.847 nvme0n1 00:24:13.847 03:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:13.847 03:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:14.106 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:14.106 Zero copy mechanism will not be used. 00:24:14.106 Running I/O for 2 seconds... 
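In the result tables bdevperf prints below, the MiB/s column is simply IOPS scaled by the configured I/O size. For this 131072-byte randread run, a quick sanity check of that relationship (the awk line is only an illustration, not part of the test):

  # MiB/s = IOPS * io_size / 2^20; matches the 901.95 MiB/s reported below for 7215.60 IOPS
  awk 'BEGIN { printf "%.2f MiB/s\n", 7215.60 * 131072 / 1048576 }'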
00:24:15.970 7184.00 IOPS, 898.00 MiB/s [2024-12-05T03:08:46.814Z] 7216.00 IOPS, 902.00 MiB/s 00:24:15.970 Latency(us) 00:24:15.970 [2024-12-05T03:08:46.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.970 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:15.970 nvme0n1 : 2.00 7215.60 901.95 0.00 0.00 2213.63 2010.76 7238.75 00:24:15.970 [2024-12-05T03:08:46.814Z] =================================================================================================================== 00:24:15.970 [2024-12-05T03:08:46.814Z] Total : 7215.60 901.95 0.00 0.00 2213.63 2010.76 7238.75 00:24:15.970 { 00:24:15.970 "results": [ 00:24:15.970 { 00:24:15.970 "job": "nvme0n1", 00:24:15.970 "core_mask": "0x2", 00:24:15.970 "workload": "randread", 00:24:15.970 "status": "finished", 00:24:15.970 "queue_depth": 16, 00:24:15.970 "io_size": 131072, 00:24:15.970 "runtime": 2.002327, 00:24:15.970 "iops": 7215.6046439967095, 00:24:15.970 "mibps": 901.9505804995887, 00:24:15.970 "io_failed": 0, 00:24:15.970 "io_timeout": 0, 00:24:15.970 "avg_latency_us": 2213.6306855934763, 00:24:15.970 "min_latency_us": 2010.7636363636364, 00:24:15.970 "max_latency_us": 7238.749090909091 00:24:15.970 } 00:24:15.970 ], 00:24:15.970 "core_count": 1 00:24:15.970 } 00:24:15.970 03:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:15.970 03:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:15.970 03:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:15.970 03:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:15.970 | select(.opcode=="crc32c") 00:24:15.970 | "\(.module_name) \(.executed)"' 00:24:15.970 03:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:16.228 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:16.228 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:16.228 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:16.228 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:16.228 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86191 00:24:16.228 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 86191 ']' 00:24:16.228 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 86191 00:24:16.228 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:16.228 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:16.228 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86191 00:24:16.228 killing process with pid 86191 00:24:16.228 Received shutdown signal, test time was about 2.000000 seconds 00:24:16.228 00:24:16.228 Latency(us) 00:24:16.228 [2024-12-05T03:08:47.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:16.228 [2024-12-05T03:08:47.072Z] =================================================================================================================== 00:24:16.228 [2024-12-05T03:08:47.072Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.228 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:16.228 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:16.228 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86191' 00:24:16.228 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 86191 00:24:16.228 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 86191 00:24:17.161 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:17.162 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:17.162 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:17.162 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:17.162 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:17.162 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:17.162 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:17.162 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86260 00:24:17.162 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86260 /var/tmp/bperf.sock 00:24:17.162 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 86260 ']' 00:24:17.162 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:17.162 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:17.162 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:17.162 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:17.162 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.162 03:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:17.162 [2024-12-05 03:08:47.967494] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:24:17.162 [2024-12-05 03:08:47.968134] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86260 ] 00:24:17.420 [2024-12-05 03:08:48.149511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.420 [2024-12-05 03:08:48.230111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.354 03:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:18.354 03:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:18.354 03:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:18.354 03:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:18.354 03:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:18.612 [2024-12-05 03:08:49.255340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:18.612 03:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:18.612 03:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:18.869 nvme0n1 00:24:18.869 03:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:18.869 03:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:19.126 Running I/O for 2 seconds... 
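Each run's summary is also emitted as a JSON document (the { "results": [ ... ] } blocks in this log). If that blob were captured to a file, a jq one-liner along these lines could pull out the headline numbers; the results.json filename is hypothetical:

  # hypothetical post-processing of a saved bdevperf result blob
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json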
00:24:20.993 15622.00 IOPS, 61.02 MiB/s [2024-12-05T03:08:51.837Z] 15812.00 IOPS, 61.77 MiB/s 00:24:20.993 Latency(us) 00:24:20.993 [2024-12-05T03:08:51.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.993 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:20.993 nvme0n1 : 2.01 15790.97 61.68 0.00 0.00 8098.67 2666.12 17635.14 00:24:20.993 [2024-12-05T03:08:51.837Z] =================================================================================================================== 00:24:20.993 [2024-12-05T03:08:51.837Z] Total : 15790.97 61.68 0.00 0.00 8098.67 2666.12 17635.14 00:24:20.993 { 00:24:20.993 "results": [ 00:24:20.993 { 00:24:20.993 "job": "nvme0n1", 00:24:20.993 "core_mask": "0x2", 00:24:20.993 "workload": "randwrite", 00:24:20.993 "status": "finished", 00:24:20.993 "queue_depth": 128, 00:24:20.993 "io_size": 4096, 00:24:20.993 "runtime": 2.01077, 00:24:20.993 "iops": 15790.965649974885, 00:24:20.993 "mibps": 61.683459570214396, 00:24:20.993 "io_failed": 0, 00:24:20.993 "io_timeout": 0, 00:24:20.993 "avg_latency_us": 8098.668682287731, 00:24:20.993 "min_latency_us": 2666.1236363636363, 00:24:20.993 "max_latency_us": 17635.14181818182 00:24:20.993 } 00:24:20.993 ], 00:24:20.993 "core_count": 1 00:24:20.993 } 00:24:20.993 03:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:20.993 03:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:20.993 03:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:20.993 03:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:20.993 | select(.opcode=="crc32c") 00:24:20.993 | "\(.module_name) \(.executed)"' 00:24:20.993 03:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:21.251 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:21.251 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:21.252 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:21.252 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:21.252 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86260 00:24:21.252 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 86260 ']' 00:24:21.252 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 86260 00:24:21.252 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:21.510 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.510 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86260 00:24:21.510 killing process with pid 86260 00:24:21.510 Received shutdown signal, test time was about 2.000000 seconds 00:24:21.510 00:24:21.510 Latency(us) 00:24:21.510 [2024-12-05T03:08:52.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:21.510 [2024-12-05T03:08:52.354Z] =================================================================================================================== 00:24:21.510 [2024-12-05T03:08:52.354Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.510 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:21.510 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:21.510 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86260' 00:24:21.510 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 86260 00:24:21.510 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 86260 00:24:22.447 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:22.447 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:22.447 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:22.447 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:22.448 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:22.448 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:22.448 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:22.448 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86330 00:24:22.448 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:22.448 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86330 /var/tmp/bperf.sock 00:24:22.448 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 86330 ']' 00:24:22.448 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:22.448 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.448 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:22.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:22.448 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.448 03:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:22.448 [2024-12-05 03:08:53.044931] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:24:22.448 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:22.448 Zero copy mechanism will not be used. 
00:24:22.448 [2024-12-05 03:08:53.045122] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86330 ] 00:24:22.448 [2024-12-05 03:08:53.227419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.707 [2024-12-05 03:08:53.311021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.275 03:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.275 03:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:23.275 03:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:23.275 03:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:23.275 03:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:23.534 [2024-12-05 03:08:54.344846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:23.793 03:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:23.793 03:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:24.053 nvme0n1 00:24:24.053 03:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:24.053 03:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:24.053 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:24.053 Zero copy mechanism will not be used. 00:24:24.053 Running I/O for 2 seconds... 
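After each run the script reads the accel framework statistics back over the same socket and checks that the crc32c work landed in the expected module before tearing bdevperf down. A condensed sketch of that check, reusing the exact jq filter from the log (the surrounding shell is simplified; the real test goes through its bperf_rpc and killprocess helpers):

  # query accel stats from bdevperf and keep only the crc32c counters
  read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # the clean-digest cases expect the software module to have executed at least one operation
  (( acc_executed > 0 )) && [[ $acc_module == software ]]
  # then shut this bdevperf instance down
  kill "$bperfpid"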
00:24:26.370 5727.00 IOPS, 715.88 MiB/s [2024-12-05T03:08:57.214Z] 5746.50 IOPS, 718.31 MiB/s 00:24:26.370 Latency(us) 00:24:26.370 [2024-12-05T03:08:57.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.370 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:26.370 nvme0n1 : 2.00 5743.61 717.95 0.00 0.00 2778.11 1608.61 4796.04 00:24:26.370 [2024-12-05T03:08:57.214Z] =================================================================================================================== 00:24:26.370 [2024-12-05T03:08:57.214Z] Total : 5743.61 717.95 0.00 0.00 2778.11 1608.61 4796.04 00:24:26.370 { 00:24:26.370 "results": [ 00:24:26.370 { 00:24:26.370 "job": "nvme0n1", 00:24:26.370 "core_mask": "0x2", 00:24:26.370 "workload": "randwrite", 00:24:26.370 "status": "finished", 00:24:26.370 "queue_depth": 16, 00:24:26.370 "io_size": 131072, 00:24:26.370 "runtime": 2.003793, 00:24:26.370 "iops": 5743.6072488525515, 00:24:26.370 "mibps": 717.9509061065689, 00:24:26.370 "io_failed": 0, 00:24:26.370 "io_timeout": 0, 00:24:26.370 "avg_latency_us": 2778.106443178856, 00:24:26.370 "min_latency_us": 1608.610909090909, 00:24:26.370 "max_latency_us": 4796.043636363636 00:24:26.370 } 00:24:26.370 ], 00:24:26.370 "core_count": 1 00:24:26.370 } 00:24:26.370 03:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:26.370 03:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:26.370 03:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:26.370 03:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:26.370 03:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:26.370 | select(.opcode=="crc32c") 00:24:26.370 | "\(.module_name) \(.executed)"' 00:24:26.370 03:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:26.370 03:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:26.370 03:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:26.370 03:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:26.370 03:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86330 00:24:26.370 03:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 86330 ']' 00:24:26.370 03:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 86330 00:24:26.370 03:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:26.370 03:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.370 03:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86330 00:24:26.370 killing process with pid 86330 00:24:26.370 Received shutdown signal, test time was about 2.000000 seconds 00:24:26.370 00:24:26.370 Latency(us) 00:24:26.370 [2024-12-05T03:08:57.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:26.370 [2024-12-05T03:08:57.214Z] =================================================================================================================== 00:24:26.370 [2024-12-05T03:08:57.214Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:26.370 03:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:26.370 03:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:26.370 03:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86330' 00:24:26.370 03:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 86330 00:24:26.370 03:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 86330 00:24:27.309 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 86092 00:24:27.309 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 86092 ']' 00:24:27.309 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 86092 00:24:27.309 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:27.309 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.309 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86092 00:24:27.309 killing process with pid 86092 00:24:27.309 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:27.309 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:27.309 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86092' 00:24:27.309 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 86092 00:24:27.309 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 86092 00:24:28.246 00:24:28.246 real 0m22.720s 00:24:28.246 user 0m43.876s 00:24:28.246 sys 0m4.603s 00:24:28.246 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.246 ************************************ 00:24:28.246 END TEST nvmf_digest_clean 00:24:28.246 ************************************ 00:24:28.246 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:28.246 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:28.246 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:28.246 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.246 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:28.246 ************************************ 00:24:28.246 START TEST nvmf_digest_error 00:24:28.246 ************************************ 00:24:28.246 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:24:28.246 03:08:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:28.246 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:28.246 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:28.246 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:28.246 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=86429 00:24:28.246 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 86429 00:24:28.246 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:28.246 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 86429 ']' 00:24:28.246 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.246 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.246 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.247 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.247 03:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:28.247 [2024-12-05 03:08:59.029060] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:24:28.247 [2024-12-05 03:08:59.029249] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.505 [2024-12-05 03:08:59.204813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.505 [2024-12-05 03:08:59.285057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.505 [2024-12-05 03:08:59.285114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.505 [2024-12-05 03:08:59.285148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.505 [2024-12-05 03:08:59.285170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.505 [2024-12-05 03:08:59.285182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
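The nvmf_digest_error test starting here differs from the clean variant in one respect: before the target finishes starting up, crc32c is re-assigned to SPDK's error accel module so that digest corruption can be injected on demand. A sketch condensed from the RPC calls recorded below (the test issues them through its rpc_cmd helper; socket and namespace plumbing are omitted here):

  # start the target paused so opcodes can be re-assigned before framework init
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  # route crc32c through the error module
  rpc.py accel_assign_opc -o crc32c -m error
  # later, after the bdevperf side attaches its controller with --ddgst, corrupt 256 crc32c operations
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256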
00:24:28.505 [2024-12-05 03:08:59.286259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.071 03:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.071 03:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:29.071 03:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:29.071 03:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:29.071 03:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:29.330 03:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.330 03:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:29.330 03:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.330 03:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:29.330 [2024-12-05 03:08:59.955157] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:29.330 03:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.330 03:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:29.330 03:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:29.330 03:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.330 03:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:29.330 [2024-12-05 03:09:00.114692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:29.589 null0 00:24:29.589 [2024-12-05 03:09:00.219688] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.589 [2024-12-05 03:09:00.244013] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:29.589 03:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.590 03:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:29.590 03:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:29.590 03:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:29.590 03:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:29.590 03:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:29.590 03:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86462 00:24:29.590 03:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86462 /var/tmp/bperf.sock 00:24:29.590 03:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:29.590 03:09:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 86462 ']' 00:24:29.590 03:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:29.590 03:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:29.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:29.590 03:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:29.590 03:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:29.590 03:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:29.590 [2024-12-05 03:09:00.342652] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:24:29.590 [2024-12-05 03:09:00.342836] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86462 ] 00:24:29.849 [2024-12-05 03:09:00.521551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.849 [2024-12-05 03:09:00.645581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.108 [2024-12-05 03:09:00.836166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:30.676 03:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:30.676 03:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:30.676 03:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:30.676 03:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:30.676 03:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:30.676 03:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.676 03:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:30.935 03:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.935 03:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:30.935 03:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:31.194 nvme0n1 00:24:31.194 03:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:31.194 03:09:01 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.194 03:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:31.194 03:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.194 03:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:31.194 03:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:31.194 Running I/O for 2 seconds... 00:24:31.194 [2024-12-05 03:09:02.007550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.194 [2024-12-05 03:09:02.007644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.194 [2024-12-05 03:09:02.007671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.194 [2024-12-05 03:09:02.026198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.194 [2024-12-05 03:09:02.026284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.194 [2024-12-05 03:09:02.026305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.454 [2024-12-05 03:09:02.045486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.454 [2024-12-05 03:09:02.045571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.454 [2024-12-05 03:09:02.045590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.454 [2024-12-05 03:09:02.063073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.454 [2024-12-05 03:09:02.063155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.454 [2024-12-05 03:09:02.063180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.454 [2024-12-05 03:09:02.081060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.454 [2024-12-05 03:09:02.081148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.454 [2024-12-05 03:09:02.081194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.454 [2024-12-05 03:09:02.099918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.454 [2024-12-05 03:09:02.099996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:15955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.454 [2024-12-05 03:09:02.100019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.454 [2024-12-05 03:09:02.118638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.454 [2024-12-05 03:09:02.118718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.454 [2024-12-05 03:09:02.118741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.454 [2024-12-05 03:09:02.137299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.454 [2024-12-05 03:09:02.137387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.454 [2024-12-05 03:09:02.137407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.454 [2024-12-05 03:09:02.155809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.454 [2024-12-05 03:09:02.155897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.454 [2024-12-05 03:09:02.155921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.454 [2024-12-05 03:09:02.174810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.454 [2024-12-05 03:09:02.174940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.454 [2024-12-05 03:09:02.174978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.454 [2024-12-05 03:09:02.194517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.454 [2024-12-05 03:09:02.194596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.454 [2024-12-05 03:09:02.194632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.454 [2024-12-05 03:09:02.214003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.454 [2024-12-05 03:09:02.214074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.454 [2024-12-05 03:09:02.214094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.454 [2024-12-05 03:09:02.233545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.454 [2024-12-05 
03:09:02.233647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.454 [2024-12-05 03:09:02.233669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.454 [2024-12-05 03:09:02.253060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.454 [2024-12-05 03:09:02.253139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.454 [2024-12-05 03:09:02.253162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.454 [2024-12-05 03:09:02.272623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.454 [2024-12-05 03:09:02.272722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.454 [2024-12-05 03:09:02.272742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.454 [2024-12-05 03:09:02.292425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.454 [2024-12-05 03:09:02.292506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.454 [2024-12-05 03:09:02.292545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.713 [2024-12-05 03:09:02.312826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.713 [2024-12-05 03:09:02.312910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.713 [2024-12-05 03:09:02.312945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.713 [2024-12-05 03:09:02.332505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.713 [2024-12-05 03:09:02.332585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.713 [2024-12-05 03:09:02.332622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.713 [2024-12-05 03:09:02.351581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.713 [2024-12-05 03:09:02.351667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.713 [2024-12-05 03:09:02.351687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.713 [2024-12-05 03:09:02.370402] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.713 [2024-12-05 03:09:02.370481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.713 [2024-12-05 03:09:02.370519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.713 [2024-12-05 03:09:02.389067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.713 [2024-12-05 03:09:02.389153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.713 [2024-12-05 03:09:02.389173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.713 [2024-12-05 03:09:02.407759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.713 [2024-12-05 03:09:02.407846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.713 [2024-12-05 03:09:02.407870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.713 [2024-12-05 03:09:02.428063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.713 [2024-12-05 03:09:02.428145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.713 [2024-12-05 03:09:02.428184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.713 [2024-12-05 03:09:02.448825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.713 [2024-12-05 03:09:02.448921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.713 [2024-12-05 03:09:02.448943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.713 [2024-12-05 03:09:02.468322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.713 [2024-12-05 03:09:02.468400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.713 [2024-12-05 03:09:02.468440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.713 [2024-12-05 03:09:02.487840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.713 [2024-12-05 03:09:02.487925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.713 [2024-12-05 03:09:02.487946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.713 [2024-12-05 03:09:02.507169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.713 [2024-12-05 03:09:02.507225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.714 [2024-12-05 03:09:02.507250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.714 [2024-12-05 03:09:02.526429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.714 [2024-12-05 03:09:02.526514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.714 [2024-12-05 03:09:02.526534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.714 [2024-12-05 03:09:02.545385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.714 [2024-12-05 03:09:02.545463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.714 [2024-12-05 03:09:02.545485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.972 [2024-12-05 03:09:02.564138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.972 [2024-12-05 03:09:02.564223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.972 [2024-12-05 03:09:02.564243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.972 [2024-12-05 03:09:02.581592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.973 [2024-12-05 03:09:02.581688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.973 [2024-12-05 03:09:02.581707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.973 [2024-12-05 03:09:02.599748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.973 [2024-12-05 03:09:02.599841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.973 [2024-12-05 03:09:02.599882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.973 [2024-12-05 03:09:02.618888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.973 [2024-12-05 03:09:02.618972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.973 [2024-12-05 03:09:02.618995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.973 [2024-12-05 03:09:02.637289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.973 [2024-12-05 03:09:02.637366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.973 [2024-12-05 03:09:02.637390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.973 [2024-12-05 03:09:02.654952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.973 [2024-12-05 03:09:02.655014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.973 [2024-12-05 03:09:02.655037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.973 [2024-12-05 03:09:02.672636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.973 [2024-12-05 03:09:02.672722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.973 [2024-12-05 03:09:02.672743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.973 [2024-12-05 03:09:02.692085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.973 [2024-12-05 03:09:02.692163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.973 [2024-12-05 03:09:02.692186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.973 [2024-12-05 03:09:02.711783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.973 [2024-12-05 03:09:02.711881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.973 [2024-12-05 03:09:02.711904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.973 [2024-12-05 03:09:02.730014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.973 [2024-12-05 03:09:02.730091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.973 [2024-12-05 03:09:02.730113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.973 [2024-12-05 03:09:02.747659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.973 [2024-12-05 03:09:02.747741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13825 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.973 [2024-12-05 03:09:02.747760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.973 [2024-12-05 03:09:02.765088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.973 [2024-12-05 03:09:02.765189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.973 [2024-12-05 03:09:02.765208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.973 [2024-12-05 03:09:02.784976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.973 [2024-12-05 03:09:02.785057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.973 [2024-12-05 03:09:02.785081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.973 [2024-12-05 03:09:02.804900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.973 [2024-12-05 03:09:02.804986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.973 [2024-12-05 03:09:02.805008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.232 [2024-12-05 03:09:02.824878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.232 [2024-12-05 03:09:02.824955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.232 [2024-12-05 03:09:02.824981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.232 [2024-12-05 03:09:02.844853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.232 [2024-12-05 03:09:02.844938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.232 [2024-12-05 03:09:02.844959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.232 [2024-12-05 03:09:02.864885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.232 [2024-12-05 03:09:02.864967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.232 [2024-12-05 03:09:02.864990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.232 [2024-12-05 03:09:02.883679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.232 [2024-12-05 03:09:02.883763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.232 [2024-12-05 03:09:02.883795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.232 [2024-12-05 03:09:02.902267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.232 [2024-12-05 03:09:02.902345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.232 [2024-12-05 03:09:02.902368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.232 [2024-12-05 03:09:02.921893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.232 [2024-12-05 03:09:02.921967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.232 [2024-12-05 03:09:02.921988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.232 [2024-12-05 03:09:02.940569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.232 [2024-12-05 03:09:02.940648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.232 [2024-12-05 03:09:02.940672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.232 [2024-12-05 03:09:02.959881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.232 [2024-12-05 03:09:02.959998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.232 [2024-12-05 03:09:02.960025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.232 13157.00 IOPS, 51.39 MiB/s [2024-12-05T03:09:03.076Z] [2024-12-05 03:09:02.978756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.232 [2024-12-05 03:09:02.978853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.232 [2024-12-05 03:09:02.978874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.232 [2024-12-05 03:09:02.997350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.232 [2024-12-05 03:09:02.997429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.232 [2024-12-05 03:09:02.997452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.232 [2024-12-05 03:09:03.017164] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.232 [2024-12-05 03:09:03.017254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.232 [2024-12-05 03:09:03.017275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.232 [2024-12-05 03:09:03.036151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.232 [2024-12-05 03:09:03.036230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.232 [2024-12-05 03:09:03.036255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.232 [2024-12-05 03:09:03.054391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.232 [2024-12-05 03:09:03.054476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.232 [2024-12-05 03:09:03.054496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.232 [2024-12-05 03:09:03.074308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.232 [2024-12-05 03:09:03.074392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.233 [2024-12-05 03:09:03.074413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.492 [2024-12-05 03:09:03.093328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.492 [2024-12-05 03:09:03.093407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.492 [2024-12-05 03:09:03.093430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.492 [2024-12-05 03:09:03.112267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.492 [2024-12-05 03:09:03.112352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.492 [2024-12-05 03:09:03.112372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.492 [2024-12-05 03:09:03.131089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.492 [2024-12-05 03:09:03.131146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.492 [2024-12-05 03:09:03.131171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.492 [2024-12-05 03:09:03.149728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.492 [2024-12-05 03:09:03.149824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.492 [2024-12-05 03:09:03.149845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.492 [2024-12-05 03:09:03.168147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.492 [2024-12-05 03:09:03.168225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.492 [2024-12-05 03:09:03.168247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.492 [2024-12-05 03:09:03.186490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.492 [2024-12-05 03:09:03.186569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.492 [2024-12-05 03:09:03.186591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.492 [2024-12-05 03:09:03.212939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.492 [2024-12-05 03:09:03.213003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.492 [2024-12-05 03:09:03.213025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.492 [2024-12-05 03:09:03.231311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.492 [2024-12-05 03:09:03.231381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.492 [2024-12-05 03:09:03.231401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.492 [2024-12-05 03:09:03.250121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.492 [2024-12-05 03:09:03.250185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.492 [2024-12-05 03:09:03.250207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.492 [2024-12-05 03:09:03.268830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.492 [2024-12-05 03:09:03.268910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.492 [2024-12-05 
03:09:03.268936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.492 [2024-12-05 03:09:03.287317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.492 [2024-12-05 03:09:03.287402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.492 [2024-12-05 03:09:03.287422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.492 [2024-12-05 03:09:03.305540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.492 [2024-12-05 03:09:03.305620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.492 [2024-12-05 03:09:03.305642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.492 [2024-12-05 03:09:03.324227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.492 [2024-12-05 03:09:03.324297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.492 [2024-12-05 03:09:03.324316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.752 [2024-12-05 03:09:03.343582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.752 [2024-12-05 03:09:03.343648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.752 [2024-12-05 03:09:03.343668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.752 [2024-12-05 03:09:03.361992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.752 [2024-12-05 03:09:03.362054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.752 [2024-12-05 03:09:03.362076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.752 [2024-12-05 03:09:03.380484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.752 [2024-12-05 03:09:03.380569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.752 [2024-12-05 03:09:03.380596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.752 [2024-12-05 03:09:03.398898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.752 [2024-12-05 03:09:03.398973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:106 nsid:1 lba:10632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.752 [2024-12-05 03:09:03.398997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.752 [2024-12-05 03:09:03.417168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.752 [2024-12-05 03:09:03.417245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.752 [2024-12-05 03:09:03.417287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.752 [2024-12-05 03:09:03.436205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.752 [2024-12-05 03:09:03.436277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.752 [2024-12-05 03:09:03.436297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.752 [2024-12-05 03:09:03.457176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.752 [2024-12-05 03:09:03.457254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.752 [2024-12-05 03:09:03.457280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.752 [2024-12-05 03:09:03.477233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.752 [2024-12-05 03:09:03.477303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.752 [2024-12-05 03:09:03.477323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.752 [2024-12-05 03:09:03.496686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.752 [2024-12-05 03:09:03.496788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.752 [2024-12-05 03:09:03.496822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.752 [2024-12-05 03:09:03.515197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.752 [2024-12-05 03:09:03.515302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.752 [2024-12-05 03:09:03.515322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.752 [2024-12-05 03:09:03.533622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.752 
[2024-12-05 03:09:03.533694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.752 [2024-12-05 03:09:03.533716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.752 [2024-12-05 03:09:03.552041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.752 [2024-12-05 03:09:03.552108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.752 [2024-12-05 03:09:03.552127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.752 [2024-12-05 03:09:03.570171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.752 [2024-12-05 03:09:03.570240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.752 [2024-12-05 03:09:03.570259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.752 [2024-12-05 03:09:03.588593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.752 [2024-12-05 03:09:03.588672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.752 [2024-12-05 03:09:03.588694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.012 [2024-12-05 03:09:03.607818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.012 [2024-12-05 03:09:03.607911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.012 [2024-12-05 03:09:03.607932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.012 [2024-12-05 03:09:03.626127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.012 [2024-12-05 03:09:03.626205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.012 [2024-12-05 03:09:03.626227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.012 [2024-12-05 03:09:03.644692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.012 [2024-12-05 03:09:03.644754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.012 [2024-12-05 03:09:03.644806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.012 [2024-12-05 03:09:03.663057] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.012 [2024-12-05 03:09:03.663117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.012 [2024-12-05 03:09:03.663138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.012 [2024-12-05 03:09:03.681246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.012 [2024-12-05 03:09:03.681308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.012 [2024-12-05 03:09:03.681330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.012 [2024-12-05 03:09:03.699785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.012 [2024-12-05 03:09:03.699880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.012 [2024-12-05 03:09:03.699901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.012 [2024-12-05 03:09:03.718251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.012 [2024-12-05 03:09:03.718329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.012 [2024-12-05 03:09:03.718348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.012 [2024-12-05 03:09:03.736738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.012 [2024-12-05 03:09:03.736842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.012 [2024-12-05 03:09:03.736862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.012 [2024-12-05 03:09:03.755133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.012 [2024-12-05 03:09:03.755186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.012 [2024-12-05 03:09:03.755205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.012 [2024-12-05 03:09:03.773347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.012 [2024-12-05 03:09:03.773409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.012 [2024-12-05 03:09:03.773427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.012 [2024-12-05 03:09:03.791737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.012 [2024-12-05 03:09:03.791809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.012 [2024-12-05 03:09:03.791828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.012 [2024-12-05 03:09:03.809873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.012 [2024-12-05 03:09:03.809950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.012 [2024-12-05 03:09:03.809967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.012 [2024-12-05 03:09:03.827533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.012 [2024-12-05 03:09:03.827609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.012 [2024-12-05 03:09:03.827627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.012 [2024-12-05 03:09:03.845257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.013 [2024-12-05 03:09:03.845335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.013 [2024-12-05 03:09:03.845353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.272 [2024-12-05 03:09:03.864603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.272 [2024-12-05 03:09:03.864683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.272 [2024-12-05 03:09:03.864701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.272 [2024-12-05 03:09:03.883172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.272 [2024-12-05 03:09:03.883255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.272 [2024-12-05 03:09:03.883319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.272 [2024-12-05 03:09:03.901409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.272 [2024-12-05 03:09:03.901485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.272 [2024-12-05 03:09:03.901503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.272 [2024-12-05 03:09:03.918987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.272 [2024-12-05 03:09:03.919054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.272 [2024-12-05 03:09:03.919073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.272 [2024-12-05 03:09:03.936573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.272 [2024-12-05 03:09:03.936650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.272 [2024-12-05 03:09:03.936672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.272 [2024-12-05 03:09:03.953989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.272 [2024-12-05 03:09:03.954065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.272 [2024-12-05 03:09:03.954083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.272 [2024-12-05 03:09:03.971390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.272 [2024-12-05 03:09:03.971467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.272 [2024-12-05 03:09:03.971485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.272 13409.50 IOPS, 52.38 MiB/s 00:24:33.272 Latency(us) 00:24:33.272 [2024-12-05T03:09:04.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.272 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:33.272 nvme0n1 : 2.01 13423.14 52.43 0.00 0.00 9528.76 8281.37 35508.60 00:24:33.272 [2024-12-05T03:09:04.116Z] =================================================================================================================== 00:24:33.272 [2024-12-05T03:09:04.116Z] Total : 13423.14 52.43 0.00 0.00 9528.76 8281.37 35508.60 00:24:33.272 { 00:24:33.272 "results": [ 00:24:33.272 { 00:24:33.272 "job": "nvme0n1", 00:24:33.272 "core_mask": "0x2", 00:24:33.272 "workload": "randread", 00:24:33.272 "status": "finished", 00:24:33.272 "queue_depth": 128, 00:24:33.272 "io_size": 4096, 00:24:33.272 "runtime": 2.007504, 00:24:33.272 "iops": 13423.136392256254, 00:24:33.272 "mibps": 52.43412653225099, 00:24:33.272 "io_failed": 0, 00:24:33.272 "io_timeout": 0, 00:24:33.272 "avg_latency_us": 9528.763759703392, 00:24:33.272 "min_latency_us": 8281.367272727273, 00:24:33.272 "max_latency_us": 35508.59636363637 00:24:33.272 } 00:24:33.272 ], 00:24:33.272 "core_count": 1 00:24:33.272 } 00:24:33.272 03:09:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
get_transient_errcount nvme0n1 00:24:33.272 03:09:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:33.272 03:09:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:33.272 03:09:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:33.272 | .driver_specific 00:24:33.272 | .nvme_error 00:24:33.272 | .status_code 00:24:33.272 | .command_transient_transport_error' 00:24:33.531 03:09:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 105 > 0 )) 00:24:33.531 03:09:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86462 00:24:33.531 03:09:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 86462 ']' 00:24:33.531 03:09:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 86462 00:24:33.531 03:09:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:33.531 03:09:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.531 03:09:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86462 00:24:33.531 03:09:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:33.531 03:09:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:33.531 killing process with pid 86462 00:24:33.531 03:09:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86462' 00:24:33.531 Received shutdown signal, test time was about 2.000000 seconds 00:24:33.531 00:24:33.531 Latency(us) 00:24:33.531 [2024-12-05T03:09:04.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.531 [2024-12-05T03:09:04.375Z] =================================================================================================================== 00:24:33.531 [2024-12-05T03:09:04.375Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:33.531 03:09:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 86462 00:24:33.531 03:09:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 86462 00:24:34.467 03:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:34.467 03:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:34.467 03:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:34.467 03:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:34.467 03:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:34.467 03:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86528 00:24:34.467 03:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:34.467 03:09:05 
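What the host/digest.sh trace above amounts to: the transient-transport-error counter is read back from the bperf bdevperf instance over its RPC socket and must come back greater than zero before the process is killed. A minimal bash sketch of that check, assuming the same rpc.py path, socket and jq filter that appear in the trace:

#!/usr/bin/env bash
# Sketch of the transient-error check traced above. Assumes the repo layout
# and RPC socket of this run; the jq filter is copied verbatim from the trace.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock

errcount=$("$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# In the run above this evaluated to 105, so the (( errcount > 0 )) gate passed.
(( errcount > 0 ))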
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86528 /var/tmp/bperf.sock 00:24:34.467 03:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 86528 ']' 00:24:34.467 03:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:34.467 03:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:34.467 03:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:34.467 03:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.467 03:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:34.467 [2024-12-05 03:09:05.263322] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:24:34.467 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:34.467 Zero copy mechanism will not be used. 00:24:34.467 [2024-12-05 03:09:05.263508] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86528 ] 00:24:34.725 [2024-12-05 03:09:05.443421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.725 [2024-12-05 03:09:05.534637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.983 [2024-12-05 03:09:05.684210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:35.549 03:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.549 03:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:35.549 03:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:35.549 03:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:35.808 03:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:35.808 03:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.808 03:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:35.808 03:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.808 03:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:35.808 03:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:36.101 nvme0n1 00:24:36.101 03:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:36.101 03:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.101 03:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:36.101 03:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.101 03:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:36.101 03:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:36.101 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:36.101 Zero copy mechanism will not be used. 00:24:36.101 Running I/O for 2 seconds... 00:24:36.101 [2024-12-05 03:09:06.891459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.101 [2024-12-05 03:09:06.891554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.101 [2024-12-05 03:09:06.891580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.101 [2024-12-05 03:09:06.896549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.101 [2024-12-05 03:09:06.896636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.101 [2024-12-05 03:09:06.896656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.101 [2024-12-05 03:09:06.901639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.101 [2024-12-05 03:09:06.901725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.101 [2024-12-05 03:09:06.901746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.101 [2024-12-05 03:09:06.906565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.101 [2024-12-05 03:09:06.906645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.101 [2024-12-05 03:09:06.906668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.101 [2024-12-05 03:09:06.912803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.101 [2024-12-05 03:09:06.912929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.102 [2024-12-05 03:09:06.912956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.102 [2024-12-05 03:09:06.918565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.102 [2024-12-05 03:09:06.918657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.102 [2024-12-05 03:09:06.918679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.389 [2024-12-05 03:09:06.924635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.389 [2024-12-05 03:09:06.924704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.389 [2024-12-05 03:09:06.924728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.389 [2024-12-05 03:09:06.930522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.389 [2024-12-05 03:09:06.930607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.389 [2024-12-05 03:09:06.930628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.389 [2024-12-05 03:09:06.935563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.389 [2024-12-05 03:09:06.935641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.389 [2024-12-05 03:09:06.935666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.389 [2024-12-05 03:09:06.940671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.389 [2024-12-05 03:09:06.940750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.389 [2024-12-05 03:09:06.940783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.389 [2024-12-05 03:09:06.945514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.389 [2024-12-05 03:09:06.945598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.389 [2024-12-05 03:09:06.945618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.389 [2024-12-05 03:09:06.950635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.389 [2024-12-05 03:09:06.950714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6144 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:36.389 [2024-12-05 03:09:06.950736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.389 [2024-12-05 03:09:06.955596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.389 [2024-12-05 03:09:06.955675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.389 [2024-12-05 03:09:06.955697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.389 [2024-12-05 03:09:06.960537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.389 [2024-12-05 03:09:06.960626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.389 [2024-12-05 03:09:06.960647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.389 [2024-12-05 03:09:06.965472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.389 [2024-12-05 03:09:06.965557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.389 [2024-12-05 03:09:06.965577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.389 [2024-12-05 03:09:06.970572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.389 [2024-12-05 03:09:06.970635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.389 [2024-12-05 03:09:06.970658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.389 [2024-12-05 03:09:06.975623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.389 [2024-12-05 03:09:06.975687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:06.975709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:06.980654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:06.980744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:06.980765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:06.985645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:06.985734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:06.985755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:06.990642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:06.990721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:06.990744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:06.995533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:06.995610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:06.995635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:07.000439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:07.000523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:07.000542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:07.005355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:07.005445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:07.005465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:07.010190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:07.010267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:07.010289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:07.015032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:07.015100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:07.015124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:07.019886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:07.019971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:07.019991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:07.024626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:07.024709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:07.024728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:07.029431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:07.029509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:07.029534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:07.034203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:07.034281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:07.034303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:07.038997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:07.039073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:07.039095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:07.043886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:07.043979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:07.043999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:07.048717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:07.048805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:07.048832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:07.053554] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:07.053632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:07.053654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:07.058335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:07.058418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:07.058438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:07.063149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:07.063267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:07.063318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:07.068001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:07.068078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:07.068101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:07.072764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:07.072841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:07.072863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:07.077572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:07.077658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.390 [2024-12-05 03:09:07.077677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.390 [2024-12-05 03:09:07.082400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.390 [2024-12-05 03:09:07.082484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.082503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.087198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.087293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.087330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.092009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.092086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.092110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.096721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.096827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.096849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.101511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.101591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.101610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.106310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.106388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.106410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.111035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.111124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.111144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.115797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.115895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.115915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.120585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.120662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.120684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.125414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.125491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.125513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.130198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.130281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.130299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.134956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.135046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.135066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.139686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.139762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.139798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.144463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.144540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.144562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.149884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.149961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1184 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.149985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.156115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.156222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.156245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.161236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.161315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.161340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.166131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.166223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.166247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.171031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.171118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.171139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.175838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.175920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.175939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.180504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.180581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.180603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.185284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.391 [2024-12-05 03:09:07.185362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.391 [2024-12-05 03:09:07.185384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.391 [2024-12-05 03:09:07.190071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.392 [2024-12-05 03:09:07.190156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.392 [2024-12-05 03:09:07.190190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.392 [2024-12-05 03:09:07.194843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.392 [2024-12-05 03:09:07.194951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.392 [2024-12-05 03:09:07.194990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.392 [2024-12-05 03:09:07.199617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.392 [2024-12-05 03:09:07.199694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.392 [2024-12-05 03:09:07.199716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.392 [2024-12-05 03:09:07.204334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.392 [2024-12-05 03:09:07.204411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.392 [2024-12-05 03:09:07.204435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.392 [2024-12-05 03:09:07.209133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.392 [2024-12-05 03:09:07.209215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.392 [2024-12-05 03:09:07.209235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.392 [2024-12-05 03:09:07.213918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.392 [2024-12-05 03:09:07.214014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.392 [2024-12-05 03:09:07.214033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.392 [2024-12-05 03:09:07.218693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:24:36.392 [2024-12-05 03:09:07.218782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.392 [2024-12-05 03:09:07.218807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.392 [2024-12-05 03:09:07.223572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.392 [2024-12-05 03:09:07.223649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.392 [2024-12-05 03:09:07.223671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.392 [2024-12-05 03:09:07.228666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.392 [2024-12-05 03:09:07.228755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.392 [2024-12-05 03:09:07.228810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.652 [2024-12-05 03:09:07.234013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.652 [2024-12-05 03:09:07.234098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.652 [2024-12-05 03:09:07.234118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.652 [2024-12-05 03:09:07.239435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.652 [2024-12-05 03:09:07.239512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.652 [2024-12-05 03:09:07.239534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.652 [2024-12-05 03:09:07.244279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.652 [2024-12-05 03:09:07.244356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.652 [2024-12-05 03:09:07.244379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.652 [2024-12-05 03:09:07.249003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.652 [2024-12-05 03:09:07.249087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.652 [2024-12-05 03:09:07.249108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.652 [2024-12-05 03:09:07.253792] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.652 [2024-12-05 03:09:07.253876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.652 [2024-12-05 03:09:07.253895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.652 [2024-12-05 03:09:07.258533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.652 [2024-12-05 03:09:07.258610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.652 [2024-12-05 03:09:07.258632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.652 [2024-12-05 03:09:07.263427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.652 [2024-12-05 03:09:07.263504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.652 [2024-12-05 03:09:07.263526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.652 [2024-12-05 03:09:07.268309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.652 [2024-12-05 03:09:07.268393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.652 [2024-12-05 03:09:07.268413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.652 [2024-12-05 03:09:07.273038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.652 [2024-12-05 03:09:07.273123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.652 [2024-12-05 03:09:07.273142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.652 [2024-12-05 03:09:07.277780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.652 [2024-12-05 03:09:07.277855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.652 [2024-12-05 03:09:07.277878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.652 [2024-12-05 03:09:07.282570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.652 [2024-12-05 03:09:07.282646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.652 [2024-12-05 03:09:07.282669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.652 [2024-12-05 03:09:07.287322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.652 [2024-12-05 03:09:07.287422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.652 [2024-12-05 03:09:07.287442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.292188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.292265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.292288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.296908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.296985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.297007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.301780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.301878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.301898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.306510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.306592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.306611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.311261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.311367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.311390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.316097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.316160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.316186] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.320942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.321025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.321044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.325827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.325913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.325933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.330647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.330724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.330746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.335500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.335578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.335600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.340493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.340579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.340599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.345365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.345449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.345468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.350142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.350219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22560 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.350244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.354878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.354979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.355020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.359650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.359735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.359754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.364496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.364579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.364598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.369292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.369368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.369390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.374025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.374102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.374124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.378880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.379003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.379024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.383584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.383670] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.383690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.388478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.388555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.653 [2024-12-05 03:09:07.388577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.653 [2024-12-05 03:09:07.393334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.653 [2024-12-05 03:09:07.393411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.393433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.398099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.398182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.398201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.402933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.403033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.403054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.407726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.407817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.407841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.412518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.412601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.412621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.417402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.417486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.417505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.422102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.422178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.422201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.426887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.426987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.427013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.431629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.431721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.431742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.436400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.436483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.436502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.441152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.441245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.441267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.445921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.445997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.446020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.450690] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.450789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.450812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.455509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.455593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.455622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.460474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.460551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.460573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.465277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.465354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.465375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.469955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.470038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.470058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.474648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.474742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.474778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.479510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.479588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.479610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.484269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.484344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.484368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.654 [2024-12-05 03:09:07.489149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.654 [2024-12-05 03:09:07.489251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.654 [2024-12-05 03:09:07.489271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.915 [2024-12-05 03:09:07.494574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.915 [2024-12-05 03:09:07.494679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.915 [2024-12-05 03:09:07.494700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.915 [2024-12-05 03:09:07.499602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.915 [2024-12-05 03:09:07.499679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.915 [2024-12-05 03:09:07.499697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.915 [2024-12-05 03:09:07.504575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.915 [2024-12-05 03:09:07.504653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.915 [2024-12-05 03:09:07.504671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.915 [2024-12-05 03:09:07.509317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.915 [2024-12-05 03:09:07.509394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.915 [2024-12-05 03:09:07.509412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.915 [2024-12-05 03:09:07.514214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.915 [2024-12-05 03:09:07.514291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.915 [2024-12-05 03:09:07.514324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.915 [2024-12-05 03:09:07.519013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.915 [2024-12-05 03:09:07.519093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.915 [2024-12-05 03:09:07.519112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.915 [2024-12-05 03:09:07.523734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.915 [2024-12-05 03:09:07.523821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.915 [2024-12-05 03:09:07.523841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.915 [2024-12-05 03:09:07.528695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.915 [2024-12-05 03:09:07.528802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.915 [2024-12-05 03:09:07.528823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.915 [2024-12-05 03:09:07.534072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.915 [2024-12-05 03:09:07.534151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.915 [2024-12-05 03:09:07.534200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.915 [2024-12-05 03:09:07.539596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.915 [2024-12-05 03:09:07.539660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.915 [2024-12-05 03:09:07.539694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.915 [2024-12-05 03:09:07.545059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.915 [2024-12-05 03:09:07.545138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.915 [2024-12-05 03:09:07.545187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.915 [2024-12-05 03:09:07.550694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.915 [2024-12-05 03:09:07.550757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:36.915 [2024-12-05 03:09:07.550806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.915 [2024-12-05 03:09:07.556147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.915 [2024-12-05 03:09:07.556256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.915 [2024-12-05 03:09:07.556291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.915 [2024-12-05 03:09:07.561410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.915 [2024-12-05 03:09:07.561487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.915 [2024-12-05 03:09:07.561506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.915 [2024-12-05 03:09:07.566577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.915 [2024-12-05 03:09:07.566654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.915 [2024-12-05 03:09:07.566672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.915 [2024-12-05 03:09:07.571899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.915 [2024-12-05 03:09:07.571965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.915 [2024-12-05 03:09:07.571985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.915 [2024-12-05 03:09:07.577002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.915 [2024-12-05 03:09:07.577081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.915 [2024-12-05 03:09:07.577099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.915 [2024-12-05 03:09:07.581944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.915 [2024-12-05 03:09:07.582021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.915 [2024-12-05 03:09:07.582040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.586637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.586715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.586733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.591536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.591614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.591633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.596347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.596424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.596443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.601041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.601117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.601136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.605743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.605830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.605848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.610388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.610464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.610481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.615352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.615429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.615447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.620199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.620276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.620294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.625032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.625108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.625127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.629725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.629812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.629831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.634426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.634504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.634523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.639292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.639384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.639402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.644033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.644110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.644129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.648766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.648841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.648859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.653509] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.653586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.653604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.658224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.658301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.658320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.662877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.662979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.663015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.667711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.667818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.667838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.672537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.672615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.672633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.677362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.677440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.677459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.682066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.682142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.682160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.686681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.686757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.686804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.691557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.691635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.691653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.696434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.696512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.696530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.701224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.701301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.701319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.705941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.706018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.916 [2024-12-05 03:09:07.706036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.916 [2024-12-05 03:09:07.710620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.916 [2024-12-05 03:09:07.710697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.917 [2024-12-05 03:09:07.710715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.917 [2024-12-05 03:09:07.715565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.917 [2024-12-05 03:09:07.715643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.917 [2024-12-05 03:09:07.715662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.917 [2024-12-05 03:09:07.720412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.917 [2024-12-05 03:09:07.720490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.917 [2024-12-05 03:09:07.720508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.917 [2024-12-05 03:09:07.725166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.917 [2024-12-05 03:09:07.725243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.917 [2024-12-05 03:09:07.725262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.917 [2024-12-05 03:09:07.729965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.917 [2024-12-05 03:09:07.730042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.917 [2024-12-05 03:09:07.730061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.917 [2024-12-05 03:09:07.734697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.917 [2024-12-05 03:09:07.734802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.917 [2024-12-05 03:09:07.734824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.917 [2024-12-05 03:09:07.739839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.917 [2024-12-05 03:09:07.739932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.917 [2024-12-05 03:09:07.739952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.917 [2024-12-05 03:09:07.744799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.917 [2024-12-05 03:09:07.744877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.917 [2024-12-05 03:09:07.744895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.917 [2024-12-05 03:09:07.749602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.917 [2024-12-05 03:09:07.749680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.917 [2024-12-05 03:09:07.749698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.917 [2024-12-05 03:09:07.754650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.917 [2024-12-05 03:09:07.754728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.917 [2024-12-05 03:09:07.754747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.178 [2024-12-05 03:09:07.759835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.178 [2024-12-05 03:09:07.759926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.178 [2024-12-05 03:09:07.759945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.178 [2024-12-05 03:09:07.764922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.178 [2024-12-05 03:09:07.764999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.178 [2024-12-05 03:09:07.765018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.178 [2024-12-05 03:09:07.769695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.178 [2024-12-05 03:09:07.769784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.178 [2024-12-05 03:09:07.769806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.178 [2024-12-05 03:09:07.774420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.178 [2024-12-05 03:09:07.774498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.178 [2024-12-05 03:09:07.774516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.178 [2024-12-05 03:09:07.779318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.178 [2024-12-05 03:09:07.779396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.178 [2024-12-05 03:09:07.779414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.178 [2024-12-05 03:09:07.784072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.178 [2024-12-05 03:09:07.784149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.178 [2024-12-05 03:09:07.784168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.178 [2024-12-05 03:09:07.788797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.178 [2024-12-05 03:09:07.788872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.178 [2024-12-05 03:09:07.788890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.178 [2024-12-05 03:09:07.793598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.178 [2024-12-05 03:09:07.793677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.178 [2024-12-05 03:09:07.793695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.178 [2024-12-05 03:09:07.798371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.178 [2024-12-05 03:09:07.798449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.178 [2024-12-05 03:09:07.798467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.178 [2024-12-05 03:09:07.803296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.178 [2024-12-05 03:09:07.803374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.178 [2024-12-05 03:09:07.803393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.178 [2024-12-05 03:09:07.808045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.178 [2024-12-05 03:09:07.808124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.178 [2024-12-05 03:09:07.808142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.178 [2024-12-05 03:09:07.812676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.178 [2024-12-05 03:09:07.812754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.178 [2024-12-05 03:09:07.812784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.178 [2024-12-05 03:09:07.817476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500002b280) 00:24:37.178 [2024-12-05 03:09:07.817554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.178 [2024-12-05 03:09:07.817572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.178 [2024-12-05 03:09:07.822171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.178 [2024-12-05 03:09:07.822249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.178 [2024-12-05 03:09:07.822267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.826809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.826887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.826906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.831556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.831633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.831651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.836410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.836488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.836507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.841203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.841279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.841297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.846098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.846177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.846195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.850770] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.850847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.850865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.855593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.855670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.855688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.860825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.860919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.860940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.865976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.866056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.866076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.871140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.871238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.871272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.876698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.876805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.876844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.882075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.882156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.882191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.179 6277.00 IOPS, 784.62 MiB/s [2024-12-05T03:09:08.023Z] [2024-12-05 03:09:07.889112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.889222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.889256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.894257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.894335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.894354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.899475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.899554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.899573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.904664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.904743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.904778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.910124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.910218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.910251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.915135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.915216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.915252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.920068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.920147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:37.179 [2024-12-05 03:09:07.920165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.924918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.924981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.925016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.929897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.929981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.930002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.934867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.934970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.935007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.939807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.939898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.939917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.944749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.944840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.944859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.949581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.949661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.949680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.954520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.179 [2024-12-05 03:09:07.954599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.179 [2024-12-05 03:09:07.954617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.179 [2024-12-05 03:09:07.959558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.180 [2024-12-05 03:09:07.959637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.180 [2024-12-05 03:09:07.959656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.180 [2024-12-05 03:09:07.964510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.180 [2024-12-05 03:09:07.964589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.180 [2024-12-05 03:09:07.964608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.180 [2024-12-05 03:09:07.969415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.180 [2024-12-05 03:09:07.969495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.180 [2024-12-05 03:09:07.969514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.180 [2024-12-05 03:09:07.974420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.180 [2024-12-05 03:09:07.974499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.180 [2024-12-05 03:09:07.974517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.180 [2024-12-05 03:09:07.979327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.180 [2024-12-05 03:09:07.979405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.180 [2024-12-05 03:09:07.979424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.180 [2024-12-05 03:09:07.984155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.180 [2024-12-05 03:09:07.984234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.180 [2024-12-05 03:09:07.984252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.180 [2024-12-05 03:09:07.989099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.180 
[2024-12-05 03:09:07.989178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.180 [2024-12-05 03:09:07.989197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.180 [2024-12-05 03:09:07.994263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.180 [2024-12-05 03:09:07.994344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.180 [2024-12-05 03:09:07.994363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.180 [2024-12-05 03:09:07.999155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.180 [2024-12-05 03:09:07.999248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.180 [2024-12-05 03:09:07.999284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.180 [2024-12-05 03:09:08.004056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.180 [2024-12-05 03:09:08.004134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.180 [2024-12-05 03:09:08.004154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.180 [2024-12-05 03:09:08.008951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.180 [2024-12-05 03:09:08.009030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.180 [2024-12-05 03:09:08.009048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.180 [2024-12-05 03:09:08.013958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.180 [2024-12-05 03:09:08.014038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.180 [2024-12-05 03:09:08.014058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.180 [2024-12-05 03:09:08.019131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.180 [2024-12-05 03:09:08.019214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.180 [2024-12-05 03:09:08.019250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.440 [2024-12-05 03:09:08.024219] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.440 [2024-12-05 03:09:08.024298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.440 [2024-12-05 03:09:08.024317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.440 [2024-12-05 03:09:08.029300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.440 [2024-12-05 03:09:08.029380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.440 [2024-12-05 03:09:08.029399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.440 [2024-12-05 03:09:08.034416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.440 [2024-12-05 03:09:08.034496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.440 [2024-12-05 03:09:08.034515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.039492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.039573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.039592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.044247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.044325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.044344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.049176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.049257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.049275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.054292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.054371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.054390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.059230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.059327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.059361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.064161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.064241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.064260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.068958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.069036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.069056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.073951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.074014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.074048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.078800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.078879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.078898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.083719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.083811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.083831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.088643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.088723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.088759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.093758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.093866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.093886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.098557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.098634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.098663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.103626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.103705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.103724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.108651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.108728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.108747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.113268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.113343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.113362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.117997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.118074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.118092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.122718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.122821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.122842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.127510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.127587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.127605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.132224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.132303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.132321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.137095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.137172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.137190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.141799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.141876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.141894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.146507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.146583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.146603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.151309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.151385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.151403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.155990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.156066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.156084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.160701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.160787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.441 [2024-12-05 03:09:08.160807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.441 [2024-12-05 03:09:08.165456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.441 [2024-12-05 03:09:08.165534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.165553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.170198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.170276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.170294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.174833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.174919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.174955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.179576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.179653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.179672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.184286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.184363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.184381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.189099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.189176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.189194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.193898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.193974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.193992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.198539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.198616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.198635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.203378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.203456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.203474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.208091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.208171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.208189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.212818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.212896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.212914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.217584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.217662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.217680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.222307] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.222384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.222402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.227040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.227119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.227139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.231804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.231895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.231914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.236561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.236638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.236656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.241413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.241491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.241509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.246056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.246133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.246152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.250781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.250859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.250877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.255399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.255476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.255494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.260218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.260296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.260315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.264862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.264939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.264958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.269590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.269668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.269687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.274440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.274519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.274537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.442 [2024-12-05 03:09:08.279736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.442 [2024-12-05 03:09:08.279814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.442 [2024-12-05 03:09:08.279834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.702 [2024-12-05 03:09:08.285038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.702 [2024-12-05 03:09:08.285101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.702 [2024-12-05 03:09:08.285119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.702 [2024-12-05 03:09:08.290229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.702 [2024-12-05 03:09:08.290308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.702 [2024-12-05 03:09:08.290327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.702 [2024-12-05 03:09:08.295287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.702 [2024-12-05 03:09:08.295395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.702 [2024-12-05 03:09:08.295414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.702 [2024-12-05 03:09:08.300173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.702 [2024-12-05 03:09:08.300251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.702 [2024-12-05 03:09:08.300269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.702 [2024-12-05 03:09:08.304995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.702 [2024-12-05 03:09:08.305072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.702 [2024-12-05 03:09:08.305090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.309789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.309866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.309884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.314590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.314665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.314683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.319652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.319729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.319748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.324382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.324460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.324479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.329258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.329335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.329353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.333977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.334054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.334073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.338680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.338760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.338807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.343447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.343525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.343543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.348326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.348403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.348422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.353128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.353204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.353222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.357995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.358072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.358091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.362576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.362652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.362670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.367389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.367465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.367483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.372184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.372261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.372279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.376960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.377037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.377056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.381648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.381725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.381743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.386350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.386427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.386446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.391291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.391383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.391401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.395988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.396065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.396084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.400662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.400740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.400758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.405310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.405386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.405404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.410002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.410078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.410095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.414694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.414795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.414832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.419419] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.419496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.419514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.424171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.424249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.424267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.429084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.429161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.703 [2024-12-05 03:09:08.429179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.703 [2024-12-05 03:09:08.433862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.703 [2024-12-05 03:09:08.433941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.433959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.438582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.438659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.438678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.443276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.443368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.443386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.448060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.448137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.448155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.452700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.452786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.452806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.457337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.457414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.457432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.462102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.462179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.462197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.466798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.466876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.466894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.471587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.471665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.471684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.476360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.476437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.476455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.481161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.481239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.481257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.485857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.485934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.485952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.490525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.490603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.490621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.495271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.495362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.495380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.499975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.500053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.500071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.504768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.504844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.504863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.509455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.509532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.509550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.514191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.514268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17888 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.514286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.518831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.518926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.518963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.523602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.523678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.523696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.528422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.528499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.528517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.533256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.533333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.533352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.537933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.538010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.538028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.704 [2024-12-05 03:09:08.543031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.704 [2024-12-05 03:09:08.543113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.704 [2024-12-05 03:09:08.543133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.548241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.548319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.548337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.553579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.553660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.553694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.558896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.558985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.559006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.564524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.564604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.564623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.570078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.570129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.570179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.575717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.575825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.575848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.581093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.581204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.581237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.586366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.586442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.586460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.591619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.591697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.591716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.596851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.596931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.596951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.601961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.602042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.602061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.607020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.607100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.607119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.612036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.612116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.612135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.616858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.616919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.616953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.621632] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.621710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.621729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.626407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.626483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.626501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.631180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.631274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.631307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.635929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.636006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.636025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.640714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.640800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.640819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.645504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.645581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.645599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.650430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.650507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.650525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.655460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.655524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.655543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.660281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.660358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.660376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.664921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.664998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.964 [2024-12-05 03:09:08.665017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.964 [2024-12-05 03:09:08.669644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.964 [2024-12-05 03:09:08.669721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.669739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.674326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.674403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.674421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.679076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.679156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.679175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.683869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.683944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.683963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.688548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.688625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.688644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.693388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.693465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.693482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.698173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.698249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.698267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.702884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.702985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.703005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.707642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.707728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.707758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.712528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.712605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.712624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.717591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.717653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7936 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.717671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.722460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.722538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.722556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.727256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.727374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.727394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.732102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.732196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.732214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.736859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.736935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.736953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.741530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.741607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.741625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.746412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.746490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.746509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.751345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.751422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.751441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.756140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.756218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.756251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.761444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.761521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.761540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.766407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.766484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.766501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.771235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.771343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.771376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.776098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.776175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.776193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.780890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.780968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.780987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.785708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.785797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.785817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.790488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.790564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.790582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.795371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.965 [2024-12-05 03:09:08.795447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.965 [2024-12-05 03:09:08.795464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.965 [2024-12-05 03:09:08.800181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.966 [2024-12-05 03:09:08.800257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.966 [2024-12-05 03:09:08.800274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.966 [2024-12-05 03:09:08.805344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.966 [2024-12-05 03:09:08.805424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.966 [2024-12-05 03:09:08.805443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.225 [2024-12-05 03:09:08.810393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.225 [2024-12-05 03:09:08.810472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.225 [2024-12-05 03:09:08.810491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.225 [2024-12-05 03:09:08.815424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.225 [2024-12-05 03:09:08.815501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.225 [2024-12-05 03:09:08.815520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.225 [2024-12-05 03:09:08.820311] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.225 [2024-12-05 03:09:08.820388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.225 [2024-12-05 03:09:08.820406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.225 [2024-12-05 03:09:08.825177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.225 [2024-12-05 03:09:08.825255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.225 [2024-12-05 03:09:08.825273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.225 [2024-12-05 03:09:08.829964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.225 [2024-12-05 03:09:08.830041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.225 [2024-12-05 03:09:08.830058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.225 [2024-12-05 03:09:08.834792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.225 [2024-12-05 03:09:08.834869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.225 [2024-12-05 03:09:08.834887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.225 [2024-12-05 03:09:08.839628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.225 [2024-12-05 03:09:08.839705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.225 [2024-12-05 03:09:08.839724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.225 [2024-12-05 03:09:08.844384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.225 [2024-12-05 03:09:08.844477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.225 [2024-12-05 03:09:08.844495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.225 [2024-12-05 03:09:08.849180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.225 [2024-12-05 03:09:08.849258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.225 [2024-12-05 03:09:08.849275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.225 [2024-12-05 03:09:08.853867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.225 [2024-12-05 03:09:08.853943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.225 [2024-12-05 03:09:08.853961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.225 [2024-12-05 03:09:08.858617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.225 [2024-12-05 03:09:08.858695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.225 [2024-12-05 03:09:08.858714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.226 [2024-12-05 03:09:08.863542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.226 [2024-12-05 03:09:08.863619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.226 [2024-12-05 03:09:08.863637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.226 [2024-12-05 03:09:08.868380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.226 [2024-12-05 03:09:08.868457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.226 [2024-12-05 03:09:08.868476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.226 [2024-12-05 03:09:08.873298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.226 [2024-12-05 03:09:08.873376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.226 [2024-12-05 03:09:08.873394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.226 [2024-12-05 03:09:08.878072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.226 [2024-12-05 03:09:08.878150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.226 [2024-12-05 03:09:08.878167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.226 [2024-12-05 03:09:08.882800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.226 [2024-12-05 03:09:08.882877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.226 [2024-12-05 03:09:08.882895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.226 6316.00 IOPS, 789.50 MiB/s 00:24:38.226 Latency(us) 00:24:38.226 [2024-12-05T03:09:09.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.226 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:38.226 nvme0n1 : 2.00 6314.01 789.25 0.00 0.00 2529.93 2129.92 11081.54 00:24:38.226 [2024-12-05T03:09:09.070Z] =================================================================================================================== 00:24:38.226 [2024-12-05T03:09:09.070Z] Total : 6314.01 789.25 0.00 0.00 2529.93 2129.92 11081.54 00:24:38.226 { 00:24:38.226 "results": [ 00:24:38.226 { 00:24:38.226 "job": "nvme0n1", 00:24:38.226 "core_mask": "0x2", 00:24:38.226 "workload": "randread", 00:24:38.226 "status": "finished", 00:24:38.226 "queue_depth": 16, 00:24:38.226 "io_size": 131072, 00:24:38.226 "runtime": 2.003164, 00:24:38.226 "iops": 6314.011234227452, 00:24:38.226 "mibps": 789.2514042784316, 00:24:38.226 "io_failed": 0, 00:24:38.226 "io_timeout": 0, 00:24:38.226 "avg_latency_us": 2529.9341682479444, 00:24:38.226 "min_latency_us": 2129.92, 00:24:38.226 "max_latency_us": 11081.541818181819 00:24:38.226 } 00:24:38.226 ], 00:24:38.226 "core_count": 1 00:24:38.226 } 00:24:38.226 03:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:38.226 03:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:38.226 03:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:38.226 | .driver_specific 00:24:38.226 | .nvme_error 00:24:38.226 | .status_code 00:24:38.226 | .command_transient_transport_error' 00:24:38.226 03:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:38.486 03:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 408 > 0 )) 00:24:38.486 03:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86528 00:24:38.486 03:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 86528 ']' 00:24:38.486 03:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 86528 00:24:38.486 03:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:38.486 03:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.486 03:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86528 00:24:38.486 03:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:38.486 03:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:38.486 killing process with pid 86528 00:24:38.486 03:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86528' 00:24:38.486 Received shutdown signal, test time was about 2.000000 seconds 00:24:38.486 00:24:38.486 Latency(us) 00:24:38.486 [2024-12-05T03:09:09.330Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.486 [2024-12-05T03:09:09.330Z] =================================================================================================================== 00:24:38.486 [2024-12-05T03:09:09.330Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:38.486 03:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 86528 00:24:38.486 03:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 86528 00:24:39.422 03:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:24:39.422 03:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:39.422 03:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:39.422 03:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:39.422 03:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:39.422 03:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:24:39.422 03:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86595 00:24:39.422 03:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86595 /var/tmp/bperf.sock 00:24:39.422 03:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 86595 ']' 00:24:39.422 03:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:39.422 03:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:39.422 03:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:39.422 03:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.422 03:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:39.422 [2024-12-05 03:09:10.087802] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:24:39.422 [2024-12-05 03:09:10.088022] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86595 ] 00:24:39.422 [2024-12-05 03:09:10.252374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.682 [2024-12-05 03:09:10.342709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.682 [2024-12-05 03:09:10.498891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:40.249 03:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.249 03:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:40.249 03:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:40.249 03:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:40.508 03:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:40.508 03:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.508 03:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:40.508 03:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.508 03:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:40.508 03:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:40.766 nvme0n1 00:24:40.766 03:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:40.766 03:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.766 03:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:40.766 03:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.766 03:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:40.767 03:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:41.026 Running I/O for 2 seconds... 
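The xtrace lines above show how digest.sh sets this pass up: bdev_nvme is told to keep per-command NVMe error counters and to retry failed I/O indefinitely (--bdev-retry-count -1), any stale crc32c error injection is cleared, a controller is attached over TCP with --ddgst so data digests are generated and verified on the qpair, crc32c corruption is then injected into the accel layer (accel_error_inject_error -o crc32c -t corrupt -i 256), and bdevperf's perform_tests RPC starts the 2-second randwrite run. The script below is a minimal standalone sketch of that same RPC sequence, reconstructed from the trace; it assumes a bdevperf instance is already listening on /var/tmp/bperf.sock (started with the flags shown above), and the target-side RPC socket reached via rpc_cmd in the trace is assumed to be SPDK's default /var/tmp/spdk.sock, which is not shown in this excerpt.

#!/usr/bin/env bash
# Sketch of the digest-error pass traced above; paths and RPC flags are taken
# verbatim from the log, the target socket is an assumption (see note above).
SPDK=/home/vagrant/spdk_repo/spdk
bperf_rpc()  { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
target_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock  "$@"; }  # assumed default socket

# Per-command NVMe error counters on, unlimited bdev retries (as in the trace).
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any previous crc32c error injection before attaching.
target_rpc accel_error_inject_error -o crc32c -t disable

# Attach the target with TCP data digest enabled.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Inject crc32c corruption with the same flags the trace shows.
target_rpc accel_error_inject_error -o crc32c -t corrupt -i 256

# Drive the workload from the already-running bdevperf instance.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

# Read back the transient-transport-error counter, as get_transient_errcount does.
bperf_rpc bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Because the bdev layer keeps retrying, each corrupted digest surfaces as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion rather than a failed I/O, which is consistent with io_failed staying 0 in the results above even though hundreds of completions carry that status; the final jq expression reads the counter that digest.sh compares against 0 (408 in the randread pass above).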
00:24:41.026 [2024-12-05 03:09:11.679472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7100 00:24:41.026 [2024-12-05 03:09:11.681419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.026 [2024-12-05 03:09:11.681503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.026 [2024-12-05 03:09:11.697478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7970 00:24:41.026 [2024-12-05 03:09:11.699476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.026 [2024-12-05 03:09:11.699538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.026 [2024-12-05 03:09:11.714990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f81e0 00:24:41.026 [2024-12-05 03:09:11.716774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.026 [2024-12-05 03:09:11.716858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:41.026 [2024-12-05 03:09:11.732828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8a50 00:24:41.026 [2024-12-05 03:09:11.734619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.026 [2024-12-05 03:09:11.734700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:41.026 [2024-12-05 03:09:11.750341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f92c0 00:24:41.026 [2024-12-05 03:09:11.752142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.026 [2024-12-05 03:09:11.752223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:41.026 [2024-12-05 03:09:11.767743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f9b30 00:24:41.026 [2024-12-05 03:09:11.769719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.026 [2024-12-05 03:09:11.769802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:41.026 [2024-12-05 03:09:11.785254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa3a0 00:24:41.026 [2024-12-05 03:09:11.787010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.026 [2024-12-05 03:09:11.787061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:41.026 [2024-12-05 03:09:11.802826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fac10 00:24:41.026 [2024-12-05 03:09:11.804577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.026 [2024-12-05 03:09:11.804637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:41.026 [2024-12-05 03:09:11.820525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:24:41.026 [2024-12-05 03:09:11.822265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.026 [2024-12-05 03:09:11.822349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:41.026 [2024-12-05 03:09:11.838221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fbcf0 00:24:41.026 [2024-12-05 03:09:11.839942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.026 [2024-12-05 03:09:11.840009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:41.026 [2024-12-05 03:09:11.856260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc560 00:24:41.026 [2024-12-05 03:09:11.857944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.026 [2024-12-05 03:09:11.858010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:41.284 [2024-12-05 03:09:11.875404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fcdd0 00:24:41.284 [2024-12-05 03:09:11.877154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.284 [2024-12-05 03:09:11.877242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:41.284 [2024-12-05 03:09:11.893588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:24:41.284 [2024-12-05 03:09:11.895369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.284 [2024-12-05 03:09:11.895447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:41.284 [2024-12-05 03:09:11.910886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fdeb0 00:24:41.284 [2024-12-05 03:09:11.912524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.284 [2024-12-05 03:09:11.912598] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:41.284 [2024-12-05 03:09:11.928348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe720 00:24:41.284 [2024-12-05 03:09:11.929865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.284 [2024-12-05 03:09:11.929936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:41.284 [2024-12-05 03:09:11.945394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ff3c8 00:24:41.284 [2024-12-05 03:09:11.946848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.284 [2024-12-05 03:09:11.946902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:41.284 [2024-12-05 03:09:11.968544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ff3c8 00:24:41.284 [2024-12-05 03:09:11.971323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.284 [2024-12-05 03:09:11.971378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.284 [2024-12-05 03:09:11.984882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe720 00:24:41.284 [2024-12-05 03:09:11.987552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.284 [2024-12-05 03:09:11.987615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:41.284 [2024-12-05 03:09:12.001276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fdeb0 00:24:41.284 [2024-12-05 03:09:12.003988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.284 [2024-12-05 03:09:12.004051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:41.284 [2024-12-05 03:09:12.017582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:24:41.284 [2024-12-05 03:09:12.020324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.284 [2024-12-05 03:09:12.020384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:41.285 [2024-12-05 03:09:12.034198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fcdd0 00:24:41.285 [2024-12-05 03:09:12.036799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:41.285 [2024-12-05 03:09:12.036856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:41.285 [2024-12-05 03:09:12.050455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc560 00:24:41.285 [2024-12-05 03:09:12.053176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.285 [2024-12-05 03:09:12.053232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:41.285 [2024-12-05 03:09:12.066798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fbcf0 00:24:41.285 [2024-12-05 03:09:12.069376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.285 [2024-12-05 03:09:12.069439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:41.285 [2024-12-05 03:09:12.083081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:24:41.285 [2024-12-05 03:09:12.085611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.285 [2024-12-05 03:09:12.085672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:41.285 [2024-12-05 03:09:12.099479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fac10 00:24:41.285 [2024-12-05 03:09:12.102067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.285 [2024-12-05 03:09:12.102122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:41.285 [2024-12-05 03:09:12.115827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa3a0 00:24:41.285 [2024-12-05 03:09:12.118263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.285 [2024-12-05 03:09:12.118317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:41.542 [2024-12-05 03:09:12.133468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f9b30 00:24:41.542 [2024-12-05 03:09:12.136055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.542 [2024-12-05 03:09:12.136110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:41.542 [2024-12-05 03:09:12.149866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f92c0 00:24:41.542 [2024-12-05 03:09:12.152491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:21336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.542 [2024-12-05 03:09:12.152555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:41.542 [2024-12-05 03:09:12.166392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8a50 00:24:41.542 [2024-12-05 03:09:12.168881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.542 [2024-12-05 03:09:12.168942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:41.542 [2024-12-05 03:09:12.182645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f81e0 00:24:41.542 [2024-12-05 03:09:12.185236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.542 [2024-12-05 03:09:12.185293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:41.542 [2024-12-05 03:09:12.199060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7970 00:24:41.542 [2024-12-05 03:09:12.201484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.542 [2024-12-05 03:09:12.201539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:41.542 [2024-12-05 03:09:12.215458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7100 00:24:41.542 [2024-12-05 03:09:12.217868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.542 [2024-12-05 03:09:12.217931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:41.542 [2024-12-05 03:09:12.231861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6890 00:24:41.542 [2024-12-05 03:09:12.234157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.542 [2024-12-05 03:09:12.234220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.542 [2024-12-05 03:09:12.248151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6020 00:24:41.542 [2024-12-05 03:09:12.250428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.542 [2024-12-05 03:09:12.250483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:41.542 [2024-12-05 03:09:12.264458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f57b0 00:24:41.542 [2024-12-05 03:09:12.266818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.542 [2024-12-05 03:09:12.266872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:41.542 [2024-12-05 03:09:12.280679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:24:41.542 [2024-12-05 03:09:12.282970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.542 [2024-12-05 03:09:12.283035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:41.542 [2024-12-05 03:09:12.296848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f46d0 00:24:41.542 [2024-12-05 03:09:12.299142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.543 [2024-12-05 03:09:12.299209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:41.543 [2024-12-05 03:09:12.313149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3e60 00:24:41.543 [2024-12-05 03:09:12.315485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.543 [2024-12-05 03:09:12.315541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:41.543 [2024-12-05 03:09:12.329774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f35f0 00:24:41.543 [2024-12-05 03:09:12.332090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.543 [2024-12-05 03:09:12.332145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:41.543 [2024-12-05 03:09:12.346069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2d80 00:24:41.543 [2024-12-05 03:09:12.348349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.543 [2024-12-05 03:09:12.348407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:41.543 [2024-12-05 03:09:12.363135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2510 00:24:41.543 [2024-12-05 03:09:12.365467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.543 [2024-12-05 03:09:12.365531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:41.543 [2024-12-05 03:09:12.380573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000173f1ca0 00:24:41.543 [2024-12-05 03:09:12.383128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.543 [2024-12-05 03:09:12.383195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:41.801 [2024-12-05 03:09:12.398216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1430 00:24:41.801 [2024-12-05 03:09:12.400457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.801 [2024-12-05 03:09:12.400517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:41.801 [2024-12-05 03:09:12.414770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:24:41.801 [2024-12-05 03:09:12.416932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.801 [2024-12-05 03:09:12.416987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:41.801 [2024-12-05 03:09:12.431178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0350 00:24:41.801 [2024-12-05 03:09:12.433313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.801 [2024-12-05 03:09:12.433367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:41.801 [2024-12-05 03:09:12.447721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173efae0 00:24:41.801 [2024-12-05 03:09:12.449795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.801 [2024-12-05 03:09:12.449870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:41.801 [2024-12-05 03:09:12.464345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:24:41.801 [2024-12-05 03:09:12.466424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.801 [2024-12-05 03:09:12.466486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:41.801 [2024-12-05 03:09:12.480884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eea00 00:24:41.801 [2024-12-05 03:09:12.482937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.801 [2024-12-05 03:09:12.483014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:41.801 [2024-12-05 03:09:12.497228] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:24:41.801 [2024-12-05 03:09:12.499357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.801 [2024-12-05 03:09:12.499411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.801 [2024-12-05 03:09:12.513672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:24:41.801 [2024-12-05 03:09:12.515771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.801 [2024-12-05 03:09:12.515834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:41.801 [2024-12-05 03:09:12.529947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed0b0 00:24:41.801 [2024-12-05 03:09:12.531964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.801 [2024-12-05 03:09:12.532025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:41.801 [2024-12-05 03:09:12.546028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec840 00:24:41.801 [2024-12-05 03:09:12.548021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.801 [2024-12-05 03:09:12.548080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:41.801 [2024-12-05 03:09:12.562416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebfd0 00:24:41.801 [2024-12-05 03:09:12.564436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.801 [2024-12-05 03:09:12.564490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:41.801 [2024-12-05 03:09:12.578741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb760 00:24:41.801 [2024-12-05 03:09:12.580706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.801 [2024-12-05 03:09:12.580777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:41.801 [2024-12-05 03:09:12.595022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:24:41.801 [2024-12-05 03:09:12.596971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.801 [2024-12-05 03:09:12.597037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:41.801 
[2024-12-05 03:09:12.611414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ea680 00:24:41.801 [2024-12-05 03:09:12.613366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.801 [2024-12-05 03:09:12.613426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:41.801 [2024-12-05 03:09:12.627746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9e10 00:24:41.801 [2024-12-05 03:09:12.629623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.801 [2024-12-05 03:09:12.629678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:42.060 [2024-12-05 03:09:12.645307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e95a0 00:24:42.060 [2024-12-05 03:09:12.647572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.060 [2024-12-05 03:09:12.647630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:42.060 14929.00 IOPS, 58.32 MiB/s [2024-12-05T03:09:12.904Z] [2024-12-05 03:09:12.665570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8d30 00:24:42.060 [2024-12-05 03:09:12.667701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.060 [2024-12-05 03:09:12.667755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:42.060 [2024-12-05 03:09:12.683955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e84c0 00:24:42.060 [2024-12-05 03:09:12.685957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.060 [2024-12-05 03:09:12.686000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:42.060 [2024-12-05 03:09:12.701177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7c50 00:24:42.060 [2024-12-05 03:09:12.703105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.060 [2024-12-05 03:09:12.703174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:42.060 [2024-12-05 03:09:12.717693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:24:42.060 [2024-12-05 03:09:12.719589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.060 [2024-12-05 03:09:12.719650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:42.060 [2024-12-05 03:09:12.734101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6b70 00:24:42.060 [2024-12-05 03:09:12.735889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.060 [2024-12-05 03:09:12.735947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:42.060 [2024-12-05 03:09:12.750349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6300 00:24:42.060 [2024-12-05 03:09:12.752157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.060 [2024-12-05 03:09:12.752227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:42.060 [2024-12-05 03:09:12.766690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5a90 00:24:42.060 [2024-12-05 03:09:12.768462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.060 [2024-12-05 03:09:12.768516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.060 [2024-12-05 03:09:12.783042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5220 00:24:42.060 [2024-12-05 03:09:12.784782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.060 [2024-12-05 03:09:12.784853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:42.060 [2024-12-05 03:09:12.799451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e49b0 00:24:42.060 [2024-12-05 03:09:12.801178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.060 [2024-12-05 03:09:12.801267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:42.060 [2024-12-05 03:09:12.815778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4140 00:24:42.060 [2024-12-05 03:09:12.817448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.060 [2024-12-05 03:09:12.817502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:42.060 [2024-12-05 03:09:12.832114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e38d0 00:24:42.060 [2024-12-05 03:09:12.833752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1516 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:42.060 [2024-12-05 03:09:12.833817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:42.060 [2024-12-05 03:09:12.848365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3060 00:24:42.060 [2024-12-05 03:09:12.850028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.060 [2024-12-05 03:09:12.850091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:42.060 [2024-12-05 03:09:12.865383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e27f0 00:24:42.060 [2024-12-05 03:09:12.867041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.060 [2024-12-05 03:09:12.867104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:42.060 [2024-12-05 03:09:12.882598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1f80 00:24:42.060 [2024-12-05 03:09:12.884495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.060 [2024-12-05 03:09:12.884559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:42.060 [2024-12-05 03:09:12.902001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1710 00:24:42.319 [2024-12-05 03:09:12.904071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.319 [2024-12-05 03:09:12.904152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:42.319 [2024-12-05 03:09:12.921352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0ea0 00:24:42.319 [2024-12-05 03:09:12.923032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.319 [2024-12-05 03:09:12.923083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:42.319 [2024-12-05 03:09:12.938989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:24:42.319 [2024-12-05 03:09:12.940603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.319 [2024-12-05 03:09:12.940660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:42.319 [2024-12-05 03:09:12.956398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dfdc0 00:24:42.319 [2024-12-05 03:09:12.958022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:3282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.319 [2024-12-05 03:09:12.958078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:42.319 [2024-12-05 03:09:12.973637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df550 00:24:42.319 [2024-12-05 03:09:12.975345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.319 [2024-12-05 03:09:12.975401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:42.319 [2024-12-05 03:09:12.990883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dece0 00:24:42.319 [2024-12-05 03:09:12.992492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.319 [2024-12-05 03:09:12.992556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:42.319 [2024-12-05 03:09:13.008151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de470 00:24:42.319 [2024-12-05 03:09:13.009638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.319 [2024-12-05 03:09:13.009699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:42.319 [2024-12-05 03:09:13.032556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ddc00 00:24:42.319 [2024-12-05 03:09:13.035444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.319 [2024-12-05 03:09:13.035507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.319 [2024-12-05 03:09:13.049971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de470 00:24:42.319 [2024-12-05 03:09:13.052873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.319 [2024-12-05 03:09:13.052936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:42.319 [2024-12-05 03:09:13.067164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dece0 00:24:42.319 [2024-12-05 03:09:13.070114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.319 [2024-12-05 03:09:13.070191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:42.319 [2024-12-05 03:09:13.084585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df550 00:24:42.319 [2024-12-05 03:09:13.087456] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.319 [2024-12-05 03:09:13.087511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:42.319 [2024-12-05 03:09:13.102172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dfdc0 00:24:42.319 [2024-12-05 03:09:13.104925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.319 [2024-12-05 03:09:13.104982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:42.319 [2024-12-05 03:09:13.119629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:24:42.319 [2024-12-05 03:09:13.122238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.319 [2024-12-05 03:09:13.122292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:42.319 [2024-12-05 03:09:13.135948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0ea0 00:24:42.319 [2024-12-05 03:09:13.138414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.319 [2024-12-05 03:09:13.138476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:42.319 [2024-12-05 03:09:13.152240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1710 00:24:42.319 [2024-12-05 03:09:13.154745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.319 [2024-12-05 03:09:13.154829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:42.577 [2024-12-05 03:09:13.169688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1f80 00:24:42.577 [2024-12-05 03:09:13.172322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.577 [2024-12-05 03:09:13.172377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:42.577 [2024-12-05 03:09:13.186256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e27f0 00:24:42.577 [2024-12-05 03:09:13.188852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.577 [2024-12-05 03:09:13.188906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:42.577 [2024-12-05 03:09:13.202672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) 
with pdu=0x2000173e3060 00:24:42.577 [2024-12-05 03:09:13.205227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.577 [2024-12-05 03:09:13.205282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:42.577 [2024-12-05 03:09:13.219151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e38d0 00:24:42.577 [2024-12-05 03:09:13.221575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.577 [2024-12-05 03:09:13.221637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:42.577 [2024-12-05 03:09:13.235519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4140 00:24:42.577 [2024-12-05 03:09:13.238013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.577 [2024-12-05 03:09:13.238072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:42.577 [2024-12-05 03:09:13.252103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e49b0 00:24:42.577 [2024-12-05 03:09:13.254523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.577 [2024-12-05 03:09:13.254578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:42.577 [2024-12-05 03:09:13.268531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5220 00:24:42.577 [2024-12-05 03:09:13.270993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.578 [2024-12-05 03:09:13.271050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:42.578 [2024-12-05 03:09:13.284788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5a90 00:24:42.578 [2024-12-05 03:09:13.287144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.578 [2024-12-05 03:09:13.287194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:42.578 [2024-12-05 03:09:13.301031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6300 00:24:42.578 [2024-12-05 03:09:13.303454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.578 [2024-12-05 03:09:13.303513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.578 [2024-12-05 03:09:13.317567] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6b70 00:24:42.578 [2024-12-05 03:09:13.320126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.578 [2024-12-05 03:09:13.320187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:42.578 [2024-12-05 03:09:13.334133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:24:42.578 [2024-12-05 03:09:13.336449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.578 [2024-12-05 03:09:13.336504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:42.578 [2024-12-05 03:09:13.350488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7c50 00:24:42.578 [2024-12-05 03:09:13.352899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.578 [2024-12-05 03:09:13.352954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:42.578 [2024-12-05 03:09:13.366898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e84c0 00:24:42.578 [2024-12-05 03:09:13.369259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.578 [2024-12-05 03:09:13.369321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:42.578 [2024-12-05 03:09:13.383363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8d30 00:24:42.578 [2024-12-05 03:09:13.385735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.578 [2024-12-05 03:09:13.385804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:42.578 [2024-12-05 03:09:13.399791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e95a0 00:24:42.578 [2024-12-05 03:09:13.402007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.578 [2024-12-05 03:09:13.402062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:42.578 [2024-12-05 03:09:13.416173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9e10 00:24:42.578 [2024-12-05 03:09:13.418600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.578 [2024-12-05 03:09:13.418656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:42.836 
[2024-12-05 03:09:13.433586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ea680 00:24:42.836 [2024-12-05 03:09:13.435897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.836 [2024-12-05 03:09:13.435951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:42.836 [2024-12-05 03:09:13.450082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:24:42.836 [2024-12-05 03:09:13.452360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.836 [2024-12-05 03:09:13.452421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:42.836 [2024-12-05 03:09:13.466536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb760 00:24:42.836 [2024-12-05 03:09:13.468819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.836 [2024-12-05 03:09:13.468879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:42.836 [2024-12-05 03:09:13.482851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebfd0 00:24:42.836 [2024-12-05 03:09:13.485065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.836 [2024-12-05 03:09:13.485121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:42.836 [2024-12-05 03:09:13.499288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec840 00:24:42.836 [2024-12-05 03:09:13.501459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.836 [2024-12-05 03:09:13.501514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:42.836 [2024-12-05 03:09:13.516275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed0b0 00:24:42.836 [2024-12-05 03:09:13.518484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.836 [2024-12-05 03:09:13.518540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:42.836 [2024-12-05 03:09:13.533708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:24:42.836 [2024-12-05 03:09:13.535943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.836 [2024-12-05 03:09:13.536017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:42.836 [2024-12-05 03:09:13.550360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:24:42.836 [2024-12-05 03:09:13.552512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.836 [2024-12-05 03:09:13.552571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:42.836 [2024-12-05 03:09:13.567067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eea00 00:24:42.836 [2024-12-05 03:09:13.569178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.836 [2024-12-05 03:09:13.569233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.836 [2024-12-05 03:09:13.583697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:24:42.836 [2024-12-05 03:09:13.585728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.836 [2024-12-05 03:09:13.585794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:42.836 [2024-12-05 03:09:13.600241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173efae0 00:24:42.836 [2024-12-05 03:09:13.602213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.836 [2024-12-05 03:09:13.602269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:42.836 [2024-12-05 03:09:13.616902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0350 00:24:42.836 [2024-12-05 03:09:13.618855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.836 [2024-12-05 03:09:13.618939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:42.836 [2024-12-05 03:09:13.633688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:24:42.836 [2024-12-05 03:09:13.635776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.836 [2024-12-05 03:09:13.635846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:42.836 [2024-12-05 03:09:13.650751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1430 00:24:42.836 [2024-12-05 03:09:13.652861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.836 [2024-12-05 03:09:13.652919] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:24:42.836 14991.50 IOPS, 58.56 MiB/s [2024-12-05T03:09:13.680Z] [2024-12-05 03:09:13.669697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1ca0 [2024-12-05 03:09:13.671900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-12-05 03:09:13.671957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:24:42.836
00:24:42.836 Latency(us)
00:24:42.836 [2024-12-05T03:09:13.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:42.836 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:42.836 nvme0n1 : 2.01 14971.17 58.48 0.00 0.00 8540.78 4796.04 32887.16
00:24:42.836 [2024-12-05T03:09:13.680Z] ===================================================================================================================
00:24:42.836 [2024-12-05T03:09:13.680Z] Total : 14971.17 58.48 0.00 0.00 8540.78 4796.04 32887.16
00:24:42.836 {
00:24:42.836 "results": [
00:24:42.836 {
00:24:42.836 "job": "nvme0n1",
00:24:42.836 "core_mask": "0x2",
00:24:42.836 "workload": "randwrite",
00:24:42.836 "status": "finished",
00:24:42.836 "queue_depth": 128,
00:24:42.836 "io_size": 4096,
00:24:42.836 "runtime": 2.011265,
00:24:42.836 "iops": 14971.174857614486,
00:24:42.836 "mibps": 58.481151787556584,
00:24:42.836 "io_failed": 0,
00:24:42.836 "io_timeout": 0,
00:24:42.836 "avg_latency_us": 8540.779493087695,
00:24:42.836 "min_latency_us": 4796.043636363636,
00:24:42.836 "max_latency_us": 32887.156363636364
00:24:42.836 }
00:24:42.836 ],
00:24:42.836 "core_count": 1
00:24:42.836 }
00:24:43.094 03:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:43.094 03:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:43.094 03:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:43.094 03:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:43.094 | .driver_specific
00:24:43.094 | .nvme_error
00:24:43.094 | .status_code
00:24:43.094 | .command_transient_transport_error'
00:24:43.352 03:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 118 > 0 ))
00:24:43.352 03:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86595
00:24:43.352 03:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 86595 ']'
00:24:43.352 03:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 86595
00:24:43.352 03:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:24:43.352 03:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:43.352 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86595
00:24:43.352 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:43.352 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:43.352 killing process with pid 86595
00:24:43.352 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86595'
00:24:43.352 Received shutdown signal, test time was about 2.000000 seconds
00:24:43.352
00:24:43.352 Latency(us)
00:24:43.352 [2024-12-05T03:09:14.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:43.352 [2024-12-05T03:09:14.196Z] ===================================================================================================================
00:24:43.352 [2024-12-05T03:09:14.196Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:43.352 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 86595
00:24:43.352 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 86595
00:24:44.287 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:24:44.287 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:44.287 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:44.287 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:44.287 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:44.287 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86664
00:24:44.287 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:24:44.287 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86664 /var/tmp/bperf.sock
00:24:44.287 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 86664 ']'
00:24:44.287 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:44.287 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:44.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:44.287 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:44.287 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:44.287 03:09:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:44.287 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:44.287 Zero copy mechanism will not be used.
00:24:44.287 [2024-12-05 03:09:14.980862] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
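The xtrace above shows the digest-error harness tearing down the previous bdevperf instance and relaunching it for the 128 KiB random-write case at queue depth 16, then waiting for its RPC socket. A minimal stand-alone sketch of that launch follows; the binary path, socket, and workload arguments are taken from the trace, while the polling loop is only an illustrative stand-in for the harness's waitforlisten helper and rpc_get_methods is used here purely as a readiness probe.

#!/usr/bin/env bash
# Launch bdevperf pinned to core mask 0x2, with its JSON-RPC server on a Unix
# socket and -z ("wait for tests") so the workload is only started later via RPC.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
  -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# Poll until the RPC socket answers before sending any configuration.
until "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.2
done
echo "bdevperf (pid $bperfpid) is listening on $BPERF_SOCK"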
00:24:44.287 [2024-12-05 03:09:14.981031] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86664 ]
00:24:44.545 [2024-12-05 03:09:15.158353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:44.545 [2024-12-05 03:09:15.239338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:44.803 [2024-12-05 03:09:15.391249] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:24:45.061 03:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:45.061 03:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:24:45.061 03:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:45.061 03:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:45.319 03:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:45.319 03:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.319 03:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:45.319 03:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.319 03:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:45.319 03:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:45.578 nvme0n1
00:24:45.839 03:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:45.839 03:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.839 03:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:45.839 03:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.839 03:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:45.839 03:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:45.839 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:45.839 Zero copy mechanism will not be used.
00:24:45.839 Running I/O for 2 seconds...
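Once the socket is up, the whole error-injection pass is driven over RPC. Below is a condensed sketch of the call sequence visible in the trace above, with the commands and arguments copied from it; the final jq filter is the one get_transient_errcount applies to the bdev_get_iostat output, and treating a non-zero counter as success mirrors the (( count > 0 )) check seen after the previous pass. Combining the steps into one script is only an illustration, not the harness's actual digest.sh.

#!/usr/bin/env bash
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Track NVMe error statistics per status code and set the bdev-layer retry
# count to -1, as in the trace.
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Reset CRC-32C error injection, attach the target with the TCP data digest
# (DDGST) enabled, then re-arm CRC-32C corruption (-t corrupt -i 32, arguments
# as in the trace) so digest failures surface as transient transport errors.
$RPC accel_error_inject_error -o crc32c -t disable
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the pre-configured randwrite workload, then read back how many writes
# completed with COMMAND TRANSIENT TRANSPORT ERROR.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
errcount=$($RPC bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 )) && echo "digest error test passed: $errcount transient transport errors"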
00:24:45.839 [2024-12-05 03:09:16.527883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.528015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.528055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.533829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.533932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.533964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.539649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.539742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.539784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.545219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.545319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.545357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.550947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.551039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.551071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.556536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.556645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.556675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.562272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.562386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.562424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.567966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.568070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.568099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.573536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.573636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.573665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.579189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.579322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.579358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.584859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.584951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.584988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.590487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.590658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.590690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.596789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.596908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.596939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.602859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.602998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.603038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.609345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.609452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.609485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.615951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.616114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.616189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.622411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.622505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.622544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.628877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.628986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.629019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.635331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.635434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.635464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.641524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.641624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.641662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.647672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.647813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.647867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.653467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.653576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.653606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.659503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.659617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.839 [2024-12-05 03:09:16.659655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:45.839 [2024-12-05 03:09:16.665666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.839 [2024-12-05 03:09:16.665797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.840 [2024-12-05 03:09:16.665852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:45.840 [2024-12-05 03:09:16.671735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.840 [2024-12-05 03:09:16.671885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.840 [2024-12-05 03:09:16.671915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:45.840 [2024-12-05 03:09:16.677762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:45.840 [2024-12-05 03:09:16.677894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.840 [2024-12-05 03:09:16.677926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.100 [2024-12-05 03:09:16.684296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.100 [2024-12-05 03:09:16.684406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.100 [2024-12-05 03:09:16.684477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.100 [2024-12-05 03:09:16.690493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.100 [2024-12-05 03:09:16.690606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.100 [2024-12-05 03:09:16.690636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.100 [2024-12-05 03:09:16.696444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.100 [2024-12-05 03:09:16.696549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.100 [2024-12-05 03:09:16.696579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.100 [2024-12-05 03:09:16.702302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.100 [2024-12-05 03:09:16.702402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.100 [2024-12-05 03:09:16.702440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.100 [2024-12-05 03:09:16.708538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.100 [2024-12-05 03:09:16.708656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.100 [2024-12-05 03:09:16.708685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.100 [2024-12-05 03:09:16.714495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.100 [2024-12-05 03:09:16.714603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.100 [2024-12-05 03:09:16.714634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.100 [2024-12-05 03:09:16.720482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.100 [2024-12-05 03:09:16.720575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.100 [2024-12-05 03:09:16.720614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.100 [2024-12-05 03:09:16.726663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.100 [2024-12-05 03:09:16.726762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.100 [2024-12-05 03:09:16.726818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.100 [2024-12-05 03:09:16.732532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.100 [2024-12-05 03:09:16.732648] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.100 [2024-12-05 03:09:16.732678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.100 [2024-12-05 03:09:16.738986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.100 [2024-12-05 03:09:16.739110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.100 [2024-12-05 03:09:16.739149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.100 [2024-12-05 03:09:16.745495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.100 [2024-12-05 03:09:16.745626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.100 [2024-12-05 03:09:16.745666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.100 [2024-12-05 03:09:16.752532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.100 [2024-12-05 03:09:16.752677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.100 [2024-12-05 03:09:16.752720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.100 [2024-12-05 03:09:16.759576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.100 [2024-12-05 03:09:16.759672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.100 [2024-12-05 03:09:16.759708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.100 [2024-12-05 03:09:16.766262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.100 [2024-12-05 03:09:16.766365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.100 [2024-12-05 03:09:16.766406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.100 [2024-12-05 03:09:16.772929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.100 [2024-12-05 03:09:16.773050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.100 [2024-12-05 03:09:16.773083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.100 [2024-12-05 03:09:16.779345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:24:46.100 [2024-12-05 03:09:16.779446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.100 [2024-12-05 03:09:16.779481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.100 [2024-12-05 03:09:16.785593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.100 [2024-12-05 03:09:16.785686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.100 [2024-12-05 03:09:16.785724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.100 [2024-12-05 03:09:16.792168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.792295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.792325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.798009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.798112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.798141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.803854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.803956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.804009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.809634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.809736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.809766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.815880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.815993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.816024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.821864] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.821970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.822009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.827936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.828048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.828087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.834288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.834393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.834423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.840475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.840577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.840617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.846348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.846441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.846480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.852279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.852391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.852421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.857972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.858071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.858100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.863766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.863878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.863915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.869443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.869544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.869573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.875636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.875757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.875803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.881483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.881610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.881647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.887584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.887685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.887724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.893489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.893593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.893622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.899544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.899675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.899709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.905226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.905319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.905355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.910993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.911118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.911150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.916805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.916910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.916940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.922490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.922593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.922630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.928300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.928417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.928454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.934158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.934287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 03:09:16.934317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.101 [2024-12-05 03:09:16.940158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.101 [2024-12-05 03:09:16.940253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.101 [2024-12-05 
03:09:16.940290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.360 [2024-12-05 03:09:16.946297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:16.946392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:16.946431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:16.952942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:16.953084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:16.953114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:16.958764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:16.958884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:16.958939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:16.964531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:16.964630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:16.964666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:16.970299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:16.970399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:16.970428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:16.976013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:16.976130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:16.976159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:16.981749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:16.981875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:16.981912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:16.987459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:16.987552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:16.987592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:16.993238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:16.993354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:16.993384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:16.998854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:16.999000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:16.999031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:17.004662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:17.004798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:17.004838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:17.010296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:17.010417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:17.010445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:17.016036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:17.016147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:17.016176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:17.021657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:17.021748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:17.021789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:17.027426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:17.027527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:17.027566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:17.033169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:17.033269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:17.033298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:17.038834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:17.038971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:17.039001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:17.044469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:17.044583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:17.044620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:17.050368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:17.050468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:17.050505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:17.056179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:17.056280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:17.056308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:17.061823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:17.061931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:17.061968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:17.067457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:17.067548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:17.067585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:17.073182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:17.073290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:17.073320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:17.078794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:17.078942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:17.078987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:17.084464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:17.084567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:17.084606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.361 [2024-12-05 03:09:17.090261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.361 [2024-12-05 03:09:17.090368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.361 [2024-12-05 03:09:17.090405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.362 [2024-12-05 03:09:17.095974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.362 [2024-12-05 03:09:17.096109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.362 [2024-12-05 03:09:17.096138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.362 [2024-12-05 03:09:17.101618] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.362 [2024-12-05 03:09:17.101710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.362 [2024-12-05 03:09:17.101746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.362 [2024-12-05 03:09:17.107395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.362 [2024-12-05 03:09:17.107494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.362 [2024-12-05 03:09:17.107531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.362 [2024-12-05 03:09:17.113048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.362 [2024-12-05 03:09:17.113182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.362 [2024-12-05 03:09:17.113211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.362 [2024-12-05 03:09:17.118696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.362 [2024-12-05 03:09:17.118819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.362 [2024-12-05 03:09:17.118850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.362 [2024-12-05 03:09:17.124447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.362 [2024-12-05 03:09:17.124573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.362 [2024-12-05 03:09:17.124611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.362 [2024-12-05 03:09:17.130362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.362 [2024-12-05 03:09:17.130483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.362 [2024-12-05 03:09:17.130523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.362 [2024-12-05 03:09:17.136153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.362 [2024-12-05 03:09:17.136256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.362 [2024-12-05 03:09:17.136285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:24:46.362 [2024-12-05 03:09:17.141916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.362 [2024-12-05 03:09:17.142027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.362 [2024-12-05 03:09:17.142064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.362 [2024-12-05 03:09:17.147582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.362 [2024-12-05 03:09:17.147674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.362 [2024-12-05 03:09:17.147710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.362 [2024-12-05 03:09:17.153334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.362 [2024-12-05 03:09:17.153442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.362 [2024-12-05 03:09:17.153471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.362 [2024-12-05 03:09:17.159014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.362 [2024-12-05 03:09:17.159119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.362 [2024-12-05 03:09:17.159149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.362 [2024-12-05 03:09:17.164641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.362 [2024-12-05 03:09:17.164751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.362 [2024-12-05 03:09:17.164838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.362 [2024-12-05 03:09:17.170302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.362 [2024-12-05 03:09:17.170394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.362 [2024-12-05 03:09:17.170432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.362 [2024-12-05 03:09:17.176032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.362 [2024-12-05 03:09:17.176143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.362 [2024-12-05 03:09:17.176182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.362 [2024-12-05 03:09:17.181568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.362 [2024-12-05 03:09:17.181663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.362 [2024-12-05 03:09:17.181698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.362 [2024-12-05 03:09:17.187356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.362 [2024-12-05 03:09:17.187470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.362 [2024-12-05 03:09:17.187507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.362 [2024-12-05 03:09:17.193118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.362 [2024-12-05 03:09:17.193235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.362 [2024-12-05 03:09:17.193264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.362 [2024-12-05 03:09:17.198734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.362 [2024-12-05 03:09:17.198882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.362 [2024-12-05 03:09:17.198936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.205109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.205242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.205279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.211034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.211172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.211206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.216883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.217005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 
03:09:17.217033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.222509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.222610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.222647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.228393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.228484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.228521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.234128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.234233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.234263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.239869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.239971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.240000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.245507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.245630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.245667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.251282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.251433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.251461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.257007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.257118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.257148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.262618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.262712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.262749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.268358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.268458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.268495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.274104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.274207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.274236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.279753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.279874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.279903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.285344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.285437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.285473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.291043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.291156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.291195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.296610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.296713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.296742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.302494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.302596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.302633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.308200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.308293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.308332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.313943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.314056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.314085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.319582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.319683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.319712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.325447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.325549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.325587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.331339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.331447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.331483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.337006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:24:46.622 [2024-12-05 03:09:17.337116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.622 [2024-12-05 03:09:17.337146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.622 [2024-12-05 03:09:17.342689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.342806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.342844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.348379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.348494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.348531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.354001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.354102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.354131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.359726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.359857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.359888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.365408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.365500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.365537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.371130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.371261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.371302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.376836] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.376940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.376969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.382463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.382567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.382601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.388233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.388325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.388361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.393885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.393993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.394023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.399564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.399669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.399698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.405475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.405577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.405616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.411121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.411216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.411269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.416970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.417083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.417113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.422552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.422643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.422681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.428460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.428563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.428600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.434178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.434298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.434327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.439861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.439961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.439991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.445480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.445572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.445609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.451190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.451320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.451359] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.456919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.457021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.457050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.623 [2024-12-05 03:09:17.463064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.623 [2024-12-05 03:09:17.463238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.623 [2024-12-05 03:09:17.463307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.883 [2024-12-05 03:09:17.469610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.883 [2024-12-05 03:09:17.469719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.883 [2024-12-05 03:09:17.469789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.883 [2024-12-05 03:09:17.475505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.883 [2024-12-05 03:09:17.475611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.883 [2024-12-05 03:09:17.475640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.883 [2024-12-05 03:09:17.481148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.883 [2024-12-05 03:09:17.481263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.883 [2024-12-05 03:09:17.481291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.883 [2024-12-05 03:09:17.486865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.487034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.487075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.492608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.492708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 
03:09:17.492737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.498305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.498409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.498438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.504042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.504135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.504171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.509659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.509767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.509821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.515466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.515587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.515616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.521262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.521371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.521400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.884 5267.00 IOPS, 658.38 MiB/s [2024-12-05T03:09:17.728Z] [2024-12-05 03:09:17.528081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.528165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.528196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.533993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.534083] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.534113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.539650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.539742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.539771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.545297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.545396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.545440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.551005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.551101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.551132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.556747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.556860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.556890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.562386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.562481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.562511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.568220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.568319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.568349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.573949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 
03:09:17.574045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.574075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.579650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.579751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.579797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.585368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.585489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.585518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.591348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.591456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.591486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.597063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.597154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.597184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.602650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.602755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.602800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.608446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.608536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.608565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.614152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.614267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.614311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.619861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.619957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.619986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.625486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.625584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.625614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.631274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.884 [2024-12-05 03:09:17.631381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.884 [2024-12-05 03:09:17.631410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.884 [2024-12-05 03:09:17.636995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.885 [2024-12-05 03:09:17.637096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.885 [2024-12-05 03:09:17.637125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.885 [2024-12-05 03:09:17.642576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.885 [2024-12-05 03:09:17.642690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.885 [2024-12-05 03:09:17.642718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.885 [2024-12-05 03:09:17.648431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.885 [2024-12-05 03:09:17.648529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.885 [2024-12-05 03:09:17.648559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.885 [2024-12-05 03:09:17.654214] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.885 [2024-12-05 03:09:17.654306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.885 [2024-12-05 03:09:17.654335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.885 [2024-12-05 03:09:17.660081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.885 [2024-12-05 03:09:17.660190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.885 [2024-12-05 03:09:17.660219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.885 [2024-12-05 03:09:17.665646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.885 [2024-12-05 03:09:17.665738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.885 [2024-12-05 03:09:17.665812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.885 [2024-12-05 03:09:17.671432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.885 [2024-12-05 03:09:17.671531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.885 [2024-12-05 03:09:17.671560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.885 [2024-12-05 03:09:17.677189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.885 [2024-12-05 03:09:17.677301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.885 [2024-12-05 03:09:17.677330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.885 [2024-12-05 03:09:17.682785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.885 [2024-12-05 03:09:17.682933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.885 [2024-12-05 03:09:17.682979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.885 [2024-12-05 03:09:17.688453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.885 [2024-12-05 03:09:17.688555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.885 [2024-12-05 03:09:17.688583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:24:46.885 [2024-12-05 03:09:17.694195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.885 [2024-12-05 03:09:17.694325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.885 [2024-12-05 03:09:17.694354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.885 [2024-12-05 03:09:17.699876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.885 [2024-12-05 03:09:17.699990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.885 [2024-12-05 03:09:17.700019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.885 [2024-12-05 03:09:17.705618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.885 [2024-12-05 03:09:17.705717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.885 [2024-12-05 03:09:17.705747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.885 [2024-12-05 03:09:17.711333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.885 [2024-12-05 03:09:17.711424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.885 [2024-12-05 03:09:17.711454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.885 [2024-12-05 03:09:17.717068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.885 [2024-12-05 03:09:17.717166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.885 [2024-12-05 03:09:17.717195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.885 [2024-12-05 03:09:17.722832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:46.885 [2024-12-05 03:09:17.722966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.885 [2024-12-05 03:09:17.722996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.145 [2024-12-05 03:09:17.729122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.145 [2024-12-05 03:09:17.729230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.145 [2024-12-05 03:09:17.729260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.145 [2024-12-05 03:09:17.735161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.145 [2024-12-05 03:09:17.735310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.145 [2024-12-05 03:09:17.735353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.145 [2024-12-05 03:09:17.740879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.145 [2024-12-05 03:09:17.740981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.145 [2024-12-05 03:09:17.741010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.145 [2024-12-05 03:09:17.746541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.145 [2024-12-05 03:09:17.746655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.145 [2024-12-05 03:09:17.746684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.752369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.752469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.752499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.758377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.758473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.758503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.764753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.764884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.764915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.771376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.771489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 
03:09:17.771519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.778348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.778442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.778476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.784809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.784949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.784983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.791236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.791381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.791411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.797537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.797629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.797659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.803847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.803967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.803999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.809852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.809950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.809981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.815860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.815961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.815990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.821587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.821681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.821710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.827469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.827561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.827591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.833244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.833336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.833365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.839468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.839596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.839626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.845849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.845949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.845981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.852169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.852290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.852353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.858823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.858967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.859001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.865636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.865736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.865782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.872046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.872180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.872210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.878069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.878187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.878217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.884201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.884296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.884328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.890386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.890489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.890520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.896388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.896484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.896514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.902263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.902364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.902394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.146 [2024-12-05 03:09:17.908303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.146 [2024-12-05 03:09:17.908413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.146 [2024-12-05 03:09:17.908445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.147 [2024-12-05 03:09:17.914423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.147 [2024-12-05 03:09:17.914525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.147 [2024-12-05 03:09:17.914555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.147 [2024-12-05 03:09:17.920557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.147 [2024-12-05 03:09:17.920651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.147 [2024-12-05 03:09:17.920682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.147 [2024-12-05 03:09:17.926670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.147 [2024-12-05 03:09:17.926788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.147 [2024-12-05 03:09:17.926849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.147 [2024-12-05 03:09:17.932882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.147 [2024-12-05 03:09:17.932979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.147 [2024-12-05 03:09:17.933010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.147 [2024-12-05 03:09:17.938783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.147 [2024-12-05 03:09:17.938884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.147 [2024-12-05 03:09:17.938940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.147 [2024-12-05 03:09:17.944660] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.147 [2024-12-05 03:09:17.944754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.147 [2024-12-05 03:09:17.944800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.147 [2024-12-05 03:09:17.950646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.147 [2024-12-05 03:09:17.950765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.147 [2024-12-05 03:09:17.950813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.147 [2024-12-05 03:09:17.956772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.147 [2024-12-05 03:09:17.956886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.147 [2024-12-05 03:09:17.956916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.147 [2024-12-05 03:09:17.962601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.147 [2024-12-05 03:09:17.962700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.147 [2024-12-05 03:09:17.962729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.147 [2024-12-05 03:09:17.968606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.147 [2024-12-05 03:09:17.968719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.147 [2024-12-05 03:09:17.968749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.147 [2024-12-05 03:09:17.974601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.147 [2024-12-05 03:09:17.974718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.147 [2024-12-05 03:09:17.974748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.147 [2024-12-05 03:09:17.980540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.147 [2024-12-05 03:09:17.980634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.147 [2024-12-05 03:09:17.980664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:24:47.147 [2024-12-05 03:09:17.986948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.147 [2024-12-05 03:09:17.987059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.147 [2024-12-05 03:09:17.987093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:17.993526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:17.993660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:17.993689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:17.999619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:17.999719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:17.999750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.005467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.005561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.005591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.011506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.011618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.011648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.017764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.017871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.017900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.023652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.023751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.023781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.029552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.029646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.029675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.035723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.035864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.035909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.041511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.041604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.041633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.047423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.047524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.047554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.053321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.053416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.053445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.059426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.059527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.059557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.065379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.065504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 
03:09:18.065534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.071404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.071520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.071549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.077611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.077708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.077740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.083548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.083655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.083683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.089450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.089541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.089571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.095368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.095469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.095498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.101107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.101199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.101228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.106712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.106826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.106857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.112370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.112461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.112491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.118269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.118372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.118402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.123863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.123955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.123984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.129568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.129671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.129701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.135273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.407 [2024-12-05 03:09:18.135381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.407 [2024-12-05 03:09:18.135410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.407 [2024-12-05 03:09:18.140904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.408 [2024-12-05 03:09:18.141027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.408 [2024-12-05 03:09:18.141056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.408 [2024-12-05 03:09:18.146542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.408 [2024-12-05 03:09:18.146633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.408 [2024-12-05 03:09:18.146662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.408 [2024-12-05 03:09:18.152306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.408 [2024-12-05 03:09:18.152417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.408 [2024-12-05 03:09:18.152446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.408 [2024-12-05 03:09:18.158015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.408 [2024-12-05 03:09:18.158130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.408 [2024-12-05 03:09:18.158173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.408 [2024-12-05 03:09:18.163707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.408 [2024-12-05 03:09:18.163844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.408 [2024-12-05 03:09:18.163874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.408 [2024-12-05 03:09:18.169324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.408 [2024-12-05 03:09:18.169416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.408 [2024-12-05 03:09:18.169445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.408 [2024-12-05 03:09:18.175040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.408 [2024-12-05 03:09:18.175148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.408 [2024-12-05 03:09:18.175178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.408 [2024-12-05 03:09:18.180729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.408 [2024-12-05 03:09:18.180833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.408 [2024-12-05 03:09:18.180862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.408 [2024-12-05 03:09:18.186299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:24:47.408 [2024-12-05 03:09:18.186397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.408 [2024-12-05 03:09:18.186426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.408 [2024-12-05 03:09:18.192014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.408 [2024-12-05 03:09:18.192107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.408 [2024-12-05 03:09:18.192136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.408 [2024-12-05 03:09:18.197605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.408 [2024-12-05 03:09:18.197705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.408 [2024-12-05 03:09:18.197734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.408 [2024-12-05 03:09:18.203413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.408 [2024-12-05 03:09:18.203504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.408 [2024-12-05 03:09:18.203533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.408 [2024-12-05 03:09:18.209110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.408 [2024-12-05 03:09:18.209218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.408 [2024-12-05 03:09:18.209247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.408 [2024-12-05 03:09:18.214710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.408 [2024-12-05 03:09:18.214817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.408 [2024-12-05 03:09:18.214847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.408 [2024-12-05 03:09:18.220350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.408 [2024-12-05 03:09:18.220448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.408 [2024-12-05 03:09:18.220477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.408 [2024-12-05 03:09:18.226073] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.408 [2024-12-05 03:09:18.226192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.408 [2024-12-05 03:09:18.226220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.408 [2024-12-05 03:09:18.231708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.408 [2024-12-05 03:09:18.231812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.408 [2024-12-05 03:09:18.231857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.408 [2024-12-05 03:09:18.237338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.408 [2024-12-05 03:09:18.237431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.408 [2024-12-05 03:09:18.237461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.408 [2024-12-05 03:09:18.243131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.408 [2024-12-05 03:09:18.243272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.408 [2024-12-05 03:09:18.243316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.668 [2024-12-05 03:09:18.249415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.668 [2024-12-05 03:09:18.249549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.668 [2024-12-05 03:09:18.249579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.668 [2024-12-05 03:09:18.255630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.668 [2024-12-05 03:09:18.255721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.668 [2024-12-05 03:09:18.255750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.668 [2024-12-05 03:09:18.261319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.668 [2024-12-05 03:09:18.261410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.668 [2024-12-05 03:09:18.261439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:24:47.668 [2024-12-05 03:09:18.267098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.668 [2024-12-05 03:09:18.267223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.668 [2024-12-05 03:09:18.267298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.668 [2024-12-05 03:09:18.272890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.668 [2024-12-05 03:09:18.273002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.668 [2024-12-05 03:09:18.273031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.668 [2024-12-05 03:09:18.278529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.668 [2024-12-05 03:09:18.278628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.668 [2024-12-05 03:09:18.278658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.668 [2024-12-05 03:09:18.284377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.668 [2024-12-05 03:09:18.284470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.668 [2024-12-05 03:09:18.284500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.668 [2024-12-05 03:09:18.290064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.290168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.290198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.295881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.295985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.296014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.301622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.301753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.301813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.307312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.307403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.307432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.312979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.313103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.313133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.318575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.318675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.318704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.324457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.324561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.324590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.330228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.330320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.330349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.336100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.336208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.336238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.341811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.341903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 
03:09:18.341933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.347506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.347610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.347639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.353199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.353292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.353321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.358905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.359084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.359114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.364545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.364638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.364667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.370275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.370384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.370413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.375888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.375983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.376012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.381470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.381570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.381600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.387154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.387267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.387310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.392801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.392922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.392951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.398492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.398584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.398614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.404315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.404415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.404445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.409930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.410046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.410075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.415620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.415735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.415765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.421301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.421394] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.421423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.427008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.427113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.427144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.432597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.432710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.432739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.438304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.669 [2024-12-05 03:09:18.438404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.669 [2024-12-05 03:09:18.438433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.669 [2024-12-05 03:09:18.444001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.670 [2024-12-05 03:09:18.444095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.670 [2024-12-05 03:09:18.444124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.670 [2024-12-05 03:09:18.449626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.670 [2024-12-05 03:09:18.449729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.670 [2024-12-05 03:09:18.449758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.670 [2024-12-05 03:09:18.455376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.670 [2024-12-05 03:09:18.455468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.670 [2024-12-05 03:09:18.455497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.670 [2024-12-05 03:09:18.461040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:24:47.670 [2024-12-05 03:09:18.461141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.670 [2024-12-05 03:09:18.461185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.670 [2024-12-05 03:09:18.466650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.670 [2024-12-05 03:09:18.466741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.670 [2024-12-05 03:09:18.466770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.670 [2024-12-05 03:09:18.472470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.670 [2024-12-05 03:09:18.472578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.670 [2024-12-05 03:09:18.472608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.670 [2024-12-05 03:09:18.478149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.670 [2024-12-05 03:09:18.478240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.670 [2024-12-05 03:09:18.478269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.670 [2024-12-05 03:09:18.483891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.670 [2024-12-05 03:09:18.483989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.670 [2024-12-05 03:09:18.484018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.670 [2024-12-05 03:09:18.489481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.670 [2024-12-05 03:09:18.489594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.670 [2024-12-05 03:09:18.489622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.670 [2024-12-05 03:09:18.495332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.670 [2024-12-05 03:09:18.495424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.670 [2024-12-05 03:09:18.495453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.670 [2024-12-05 03:09:18.501108] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.670 [2024-12-05 03:09:18.501217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.670 [2024-12-05 03:09:18.501246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.670 [2024-12-05 03:09:18.507022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.670 [2024-12-05 03:09:18.507143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.670 [2024-12-05 03:09:18.507176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.928 [2024-12-05 03:09:18.513178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.928 [2024-12-05 03:09:18.513285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.928 [2024-12-05 03:09:18.513314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.928 [2024-12-05 03:09:18.519211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.928 [2024-12-05 03:09:18.519345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.928 [2024-12-05 03:09:18.519375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.928 [2024-12-05 03:09:18.524984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:47.929 [2024-12-05 03:09:18.525079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.929 [2024-12-05 03:09:18.525108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:47.929 5273.00 IOPS, 659.12 MiB/s
00:24:47.929                                                                                         Latency(us)
00:24:47.929 [2024-12-05T03:09:18.773Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:24:47.929 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:24:47.929 nvme0n1                     :       2.00    5272.02     659.00       0.00      0.00    3026.87    2115.03    7089.80
00:24:47.929 [2024-12-05T03:09:18.773Z] ===================================================================================================================
00:24:47.929 [2024-12-05T03:09:18.773Z] Total                       :               5272.02     659.00       0.00      0.00    3026.87    2115.03    7089.80
00:24:47.929 {
00:24:47.929   "results": [
00:24:47.929     {
00:24:47.929       "job": "nvme0n1",
00:24:47.929       "core_mask": "0x2",
00:24:47.929       "workload": "randwrite",
00:24:47.929       "status": "finished",
00:24:47.929       "queue_depth": 16,
00:24:47.929       "io_size": 131072,
00:24:47.929       "runtime": 2.004354,
00:24:47.929       "iops": 5272.022806350575,
00:24:47.929       "mibps": 659.0028507938218,
00:24:47.929       "io_failed": 0,
00:24:47.929       "io_timeout": 0,
00:24:47.929       "avg_latency_us": 3026.8677794506057,
00:24:47.929       "min_latency_us": 2115.0254545454545,
00:24:47.929       "max_latency_us": 7089.8036363636365
00:24:47.929     }
00:24:47.929   ],
00:24:47.929   "core_count": 1
00:24:47.929 }
00:24:47.929 03:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:47.929 03:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:47.929 03:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:47.929 | .driver_specific 00:24:47.929 | .nvme_error 00:24:47.929 | .status_code 00:24:47.929 | .command_transient_transport_error' 00:24:47.929 03:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:48.187 03:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 341 > 0 )) 00:24:48.187 03:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86664 00:24:48.187 03:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 86664 ']' 00:24:48.187 03:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 86664 00:24:48.187 03:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:48.187 03:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:48.187 03:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86664 00:24:48.187 03:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:48.187 03:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:48.187 killing process with pid 86664 00:24:48.187 03:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86664' 00:24:48.187 03:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 86664
00:24:48.187 Received shutdown signal, test time was about 2.000000 seconds
00:24:48.187
00:24:48.187                                                                                         Latency(us)
00:24:48.187 [2024-12-05T03:09:19.031Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:24:48.187 [2024-12-05T03:09:19.031Z] ===================================================================================================================
00:24:48.187 [2024-12-05T03:09:19.031Z] Total                       :                  0.00       0.00       0.00      0.00       0.00       0.00       0.00
00:24:48.187 03:09:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 86664 00:24:49.122 03:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 86429 00:24:49.122 03:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 86429 ']' 00:24:49.122 03:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 86429 00:24:49.122 03:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:49.122 03:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959
-- # '[' Linux = Linux ']' 00:24:49.122 03:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86429 00:24:49.122 03:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:49.122 03:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:49.122 killing process with pid 86429 00:24:49.122 03:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86429' 00:24:49.122 03:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 86429 00:24:49.122 03:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 86429 00:24:49.688 00:24:49.688 real 0m21.603s 00:24:49.688 user 0m41.362s 00:24:49.688 sys 0m4.596s 00:24:49.688 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:49.688 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:49.688 ************************************ 00:24:49.688 END TEST nvmf_digest_error 00:24:49.688 ************************************ 00:24:49.946 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:49.947 rmmod nvme_tcp 00:24:49.947 rmmod nvme_fabrics 00:24:49.947 rmmod nvme_keyring 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 86429 ']' 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 86429 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 86429 ']' 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 86429 00:24:49.947 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (86429) - No such process 00:24:49.947 Process with pid 86429 is not found 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 86429 is not found' 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 
-- # iptr 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:49.947 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:50.205 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:50.205 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:50.205 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:50.205 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:50.205 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:50.205 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.205 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.205 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.205 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:24:50.205 00:24:50.205 real 0m45.421s 00:24:50.205 user 1m25.539s 00:24:50.205 sys 0m9.620s 00:24:50.205 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:50.205 03:09:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:50.205 ************************************ 00:24:50.205 END TEST nvmf_digest 00:24:50.205 ************************************ 00:24:50.205 03:09:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:24:50.205 03:09:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:24:50.205 03:09:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:50.205 03:09:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:50.205 03:09:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:50.205 
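Before the multipath test starts, two details of the nvmf_digest_error summary above are worth spelling out. The bperf job completed 5272.02 write IOPS of 131072-byte I/Os, which is exactly the 659.00 MiB/s shown in the table, and although io_failed stayed at 0, the injected data-digest corruption surfaced as COMMAND TRANSIENT TRANSPORT ERROR completions; the (( 341 > 0 )) check above compares that per-bdev counter, read back over the bperf RPC socket, against zero. The bash sketch below is illustrative only: the awk line just reproduces the throughput arithmetic, and the helper body is reconstructed from the traced rpc.py and jq calls rather than copied from host/digest.sh.

    # Illustrative sketch -- reconstructed from the trace above, not the actual host/digest.sh helper.
    # 1) The MiB/s column follows from IOPS multiplied by the 131072-byte IO size:
    awk 'BEGIN { printf "%.2f MiB/s\n", 5272.02 * 131072 / (1024 * 1024) }'   # prints 659.00
    # 2) The transient-transport-error counter is read back from the bperf bdev statistics:
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

Under those assumptions, the test passes when the counter returned by the helper is non-zero, which is what the digest-error run above demonstrated.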
03:09:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.205 ************************************ 00:24:50.205 START TEST nvmf_host_multipath 00:24:50.205 ************************************ 00:24:50.205 03:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:50.205 * Looking for test storage... 00:24:50.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:50.205 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:50.205 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:24:50.205 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:50.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.465 --rc genhtml_branch_coverage=1 00:24:50.465 --rc genhtml_function_coverage=1 00:24:50.465 --rc genhtml_legend=1 00:24:50.465 --rc geninfo_all_blocks=1 00:24:50.465 --rc geninfo_unexecuted_blocks=1 00:24:50.465 00:24:50.465 ' 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:50.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.465 --rc genhtml_branch_coverage=1 00:24:50.465 --rc genhtml_function_coverage=1 00:24:50.465 --rc genhtml_legend=1 00:24:50.465 --rc geninfo_all_blocks=1 00:24:50.465 --rc geninfo_unexecuted_blocks=1 00:24:50.465 00:24:50.465 ' 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:50.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.465 --rc genhtml_branch_coverage=1 00:24:50.465 --rc genhtml_function_coverage=1 00:24:50.465 --rc genhtml_legend=1 00:24:50.465 --rc geninfo_all_blocks=1 00:24:50.465 --rc geninfo_unexecuted_blocks=1 00:24:50.465 00:24:50.465 ' 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:50.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.465 --rc genhtml_branch_coverage=1 00:24:50.465 --rc genhtml_function_coverage=1 00:24:50.465 --rc genhtml_legend=1 00:24:50.465 --rc geninfo_all_blocks=1 00:24:50.465 --rc geninfo_unexecuted_blocks=1 00:24:50.465 00:24:50.465 ' 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.465 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:50.466 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:50.466 Cannot find device "nvmf_init_br" 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:50.466 Cannot find device "nvmf_init_br2" 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:50.466 Cannot find device "nvmf_tgt_br" 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:50.466 Cannot find device "nvmf_tgt_br2" 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:50.466 Cannot find device "nvmf_init_br" 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:50.466 Cannot find device "nvmf_init_br2" 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:50.466 Cannot find device "nvmf_tgt_br" 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:50.466 Cannot find device "nvmf_tgt_br2" 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:50.466 Cannot find device "nvmf_br" 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:50.466 Cannot find device "nvmf_init_if" 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:50.466 Cannot find device "nvmf_init_if2" 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:24:50.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:50.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:50.466 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
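(The nvmf_veth_init entries around this point build the virtual test network: two initiator veth interfaces stay in the root namespace, the two target veth interfaces are moved into the nvmf_tgt_ns_spdk namespace, and all four peer ends are enslaved to the nvmf_br bridge so the initiator side can reach the target addresses. A condensed sketch of the equivalent standalone commands, using only the interface names and addresses recorded in this trace — not the test script itself, and with the per-interface `up` and iptables steps summarized at the end:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator path 1 (10.0.0.1)
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator path 2 (10.0.0.2)
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target path 1 (10.0.0.3)
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target path 2 (10.0.0.4)
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for p in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$p" master nvmf_br                            # bridge all peer ends together
  done
  # plus `ip link set ... up` on every interface and iptables ACCEPT rules for TCP port 4420,
  # as recorded by the common.sh@196-@219 entries in this trace.

The four pings that follow confirm each side can reach the other across the bridge before the target is started.)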
00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:50.726 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:50.726 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:24:50.726 00:24:50.726 --- 10.0.0.3 ping statistics --- 00:24:50.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.726 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:50.726 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:50.726 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:24:50.726 00:24:50.726 --- 10.0.0.4 ping statistics --- 00:24:50.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.726 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:50.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:50.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:24:50.726 00:24:50.726 --- 10.0.0.1 ping statistics --- 00:24:50.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.726 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:50.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:50.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:24:50.726 00:24:50.726 --- 10.0.0.2 ping statistics --- 00:24:50.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.726 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=86996 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 86996 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 86996 ']' 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:50.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.726 03:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:50.985 [2024-12-05 03:09:21.661090] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
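(Once connectivity is verified, nvmfappstart launches the target inside the namespace — `ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3`, pid 86996 here — and the test configures it over /var/tmp/spdk.sock. The configuration traced in the following entries reduces to this RPC sequence; a summary sketch assembled only from the commands recorded below, with paths shortened to the repo root:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  # bdevperf (-m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90) then attaches
  # both listeners as a single multipath bdev:
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 \
      -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

After that, each set_ANA_state round flips the two listeners between optimized, non_optimized and inaccessible, and confirm_io_on_port runs scripts/bpf/nvmf_path.bt against pid 86996 for six seconds, extracts the @path[10.0.0.3, PORT] counters from the trace with awk/cut/sed, and compares the port that actually carried I/O with the trsvcid that nvmf_subsystem_get_listeners reports for the expected ANA state — which is what the repeated "Attaching 4 probes..." blocks below are checking.)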
00:24:50.985 [2024-12-05 03:09:21.661253] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.244 [2024-12-05 03:09:21.843934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:51.244 [2024-12-05 03:09:21.932617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.244 [2024-12-05 03:09:21.932915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.244 [2024-12-05 03:09:21.933038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.244 [2024-12-05 03:09:21.933153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.244 [2024-12-05 03:09:21.933248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:51.244 [2024-12-05 03:09:21.934905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.244 [2024-12-05 03:09:21.934958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.244 [2024-12-05 03:09:22.080726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:51.812 03:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.812 03:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:24:51.812 03:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:51.812 03:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:51.812 03:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:52.071 03:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.071 03:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=86996 00:24:52.071 03:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:52.329 [2024-12-05 03:09:22.941461] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.329 03:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:52.586 Malloc0 00:24:52.586 03:09:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:52.844 03:09:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:53.102 03:09:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:53.361 [2024-12-05 03:09:23.952443] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:53.361 03:09:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:24:53.361 [2024-12-05 03:09:24.160492] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:53.361 03:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=87054 00:24:53.361 03:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:53.361 03:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:53.361 03:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 87054 /var/tmp/bdevperf.sock 00:24:53.361 03:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 87054 ']' 00:24:53.361 03:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:53.361 03:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:53.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:53.361 03:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:53.361 03:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:53.361 03:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:54.736 03:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.736 03:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:24:54.736 03:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:54.994 03:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:55.252 Nvme0n1 00:24:55.252 03:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:55.510 Nvme0n1 00:24:55.510 03:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:24:55.510 03:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:56.446 03:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:24:56.446 03:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:24:56.705 03:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:24:56.963 03:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:24:56.963 03:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87097 00:24:56.963 03:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86996 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:56.963 03:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:03.545 03:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:03.545 03:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:03.545 03:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:03.545 03:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:03.545 Attaching 4 probes... 00:25:03.545 @path[10.0.0.3, 4421]: 15825 00:25:03.545 @path[10.0.0.3, 4421]: 15843 00:25:03.545 @path[10.0.0.3, 4421]: 15867 00:25:03.545 @path[10.0.0.3, 4421]: 15961 00:25:03.545 @path[10.0.0.3, 4421]: 16080 00:25:03.545 03:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:03.545 03:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:03.545 03:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:03.545 03:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:03.545 03:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:03.545 03:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:03.545 03:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87097 00:25:03.545 03:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:03.545 03:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:25:03.545 03:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:03.545 03:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:03.804 03:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:25:03.804 03:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87216 00:25:03.804 03:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86996 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:03.804 03:09:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:10.395 03:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:10.395 03:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:10.395 03:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:10.395 03:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:10.395 Attaching 4 probes... 00:25:10.395 @path[10.0.0.3, 4420]: 15877 00:25:10.395 @path[10.0.0.3, 4420]: 16432 00:25:10.395 @path[10.0.0.3, 4420]: 16537 00:25:10.395 @path[10.0.0.3, 4420]: 16422 00:25:10.395 @path[10.0.0.3, 4420]: 16184 00:25:10.395 03:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:10.395 03:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:10.395 03:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:10.395 03:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:10.395 03:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:10.395 03:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:10.395 03:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87216 00:25:10.395 03:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:10.395 03:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:25:10.395 03:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:10.395 03:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:10.654 03:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:25:10.654 03:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87329 00:25:10.654 03:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:10.655 03:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86996 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:17.269 03:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:17.269 03:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:17.269 03:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:17.269 03:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:17.269 Attaching 4 probes... 00:25:17.269 @path[10.0.0.3, 4421]: 12519 00:25:17.269 @path[10.0.0.3, 4421]: 15851 00:25:17.269 @path[10.0.0.3, 4421]: 15750 00:25:17.269 @path[10.0.0.3, 4421]: 15805 00:25:17.269 @path[10.0.0.3, 4421]: 15982 00:25:17.269 03:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:17.269 03:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:17.269 03:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:17.269 03:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:17.269 03:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:17.269 03:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:17.269 03:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87329 00:25:17.269 03:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:17.269 03:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:25:17.269 03:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:17.269 03:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:17.529 03:09:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:25:17.529 03:09:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87437 00:25:17.529 03:09:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86996 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:17.529 03:09:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:24.095 03:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:24.095 03:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:25:24.095 03:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:25:24.095 03:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:24.095 Attaching 4 probes... 
00:25:24.095 00:25:24.095 00:25:24.095 00:25:24.095 00:25:24.095 00:25:24.095 03:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:24.095 03:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:24.095 03:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:24.095 03:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:25:24.095 03:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:25:24.095 03:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:25:24.095 03:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87437 00:25:24.095 03:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:24.095 03:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:25:24.095 03:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:24.095 03:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:24.354 03:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:25:24.354 03:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86996 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:24.354 03:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87556 00:25:24.354 03:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:30.943 03:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:30.943 03:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:30.943 03:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:30.943 03:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:30.943 Attaching 4 probes... 
00:25:30.943 @path[10.0.0.3, 4421]: 15333 00:25:30.943 @path[10.0.0.3, 4421]: 15664 00:25:30.943 @path[10.0.0.3, 4421]: 15759 00:25:30.943 @path[10.0.0.3, 4421]: 15760 00:25:30.943 @path[10.0.0.3, 4421]: 15558 00:25:30.943 03:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:30.943 03:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:30.943 03:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:30.943 03:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:30.943 03:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:30.943 03:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:30.943 03:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87556 00:25:30.944 03:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:30.944 03:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:30.944 03:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:25:31.880 03:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:25:31.880 03:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87674 00:25:31.880 03:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86996 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:31.880 03:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:38.453 03:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:38.453 03:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:38.453 03:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:38.453 03:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:38.453 Attaching 4 probes... 
00:25:38.453 @path[10.0.0.3, 4420]: 15289 00:25:38.453 @path[10.0.0.3, 4420]: 15742 00:25:38.453 @path[10.0.0.3, 4420]: 15785 00:25:38.453 @path[10.0.0.3, 4420]: 15917 00:25:38.453 @path[10.0.0.3, 4420]: 15824 00:25:38.453 03:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:38.453 03:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:38.453 03:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:38.453 03:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:38.453 03:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:38.453 03:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:38.453 03:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87674 00:25:38.453 03:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:38.453 03:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:38.453 [2024-12-05 03:10:09.048318] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:38.453 03:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:38.712 03:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:25:45.278 03:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:25:45.278 03:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87849 00:25:45.278 03:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86996 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:45.278 03:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:50.550 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:50.550 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:50.808 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:50.808 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:50.808 Attaching 4 probes... 
00:25:50.808 @path[10.0.0.3, 4421]: 15172 00:25:50.808 @path[10.0.0.3, 4421]: 15420 00:25:50.808 @path[10.0.0.3, 4421]: 15436 00:25:50.808 @path[10.0.0.3, 4421]: 15517 00:25:50.808 @path[10.0.0.3, 4421]: 15663 00:25:50.808 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:50.808 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:50.808 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:50.808 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:50.808 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:50.808 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:50.808 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87849 00:25:50.808 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:50.808 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 87054 00:25:50.808 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 87054 ']' 00:25:50.808 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 87054 00:25:50.808 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:25:50.808 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:50.808 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87054 00:25:50.808 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:50.808 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:50.808 killing process with pid 87054 00:25:50.808 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87054' 00:25:50.809 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 87054 00:25:50.809 03:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 87054 00:25:51.067 { 00:25:51.067 "results": [ 00:25:51.067 { 00:25:51.067 "job": "Nvme0n1", 00:25:51.067 "core_mask": "0x4", 00:25:51.067 "workload": "verify", 00:25:51.067 "status": "terminated", 00:25:51.067 "verify_range": { 00:25:51.067 "start": 0, 00:25:51.067 "length": 16384 00:25:51.067 }, 00:25:51.067 "queue_depth": 128, 00:25:51.067 "io_size": 4096, 00:25:51.067 "runtime": 55.297859, 00:25:51.067 "iops": 6740.67688588088, 00:25:51.067 "mibps": 26.330769085472188, 00:25:51.067 "io_failed": 0, 00:25:51.067 "io_timeout": 0, 00:25:51.067 "avg_latency_us": 18964.278696169327, 00:25:51.067 "min_latency_us": 1482.0072727272727, 00:25:51.067 "max_latency_us": 7046430.72 00:25:51.067 } 00:25:51.067 ], 00:25:51.067 "core_count": 1 00:25:51.067 } 00:25:52.013 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 87054 00:25:52.013 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:52.013 [2024-12-05 03:09:24.263581] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 
24.03.0 initialization... 00:25:52.013 [2024-12-05 03:09:24.263731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87054 ] 00:25:52.013 [2024-12-05 03:09:24.434060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.013 [2024-12-05 03:09:24.534065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:52.013 [2024-12-05 03:09:24.688276] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:52.013 Running I/O for 90 seconds... 00:25:52.013 8290.00 IOPS, 32.38 MiB/s [2024-12-05T03:10:22.857Z] 8243.50 IOPS, 32.20 MiB/s [2024-12-05T03:10:22.857Z] 8186.33 IOPS, 31.98 MiB/s [2024-12-05T03:10:22.857Z] 8115.75 IOPS, 31.70 MiB/s [2024-12-05T03:10:22.857Z] 8084.60 IOPS, 31.58 MiB/s [2024-12-05T03:10:22.857Z] 8066.50 IOPS, 31.51 MiB/s [2024-12-05T03:10:22.857Z] 8061.57 IOPS, 31.49 MiB/s [2024-12-05T03:10:22.857Z] 8042.88 IOPS, 31.42 MiB/s [2024-12-05T03:10:22.857Z] [2024-12-05 03:09:34.551173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.013 [2024-12-05 03:09:34.551290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:52.013 [2024-12-05 03:09:34.551402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.013 [2024-12-05 03:09:34.551429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.013 [2024-12-05 03:09:34.551459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.013 [2024-12-05 03:09:34.551480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.013 [2024-12-05 03:09:34.551507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.013 [2024-12-05 03:09:34.551526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:52.013 [2024-12-05 03:09:34.551552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.013 [2024-12-05 03:09:34.551571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:52.013 [2024-12-05 03:09:34.551598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.013 [2024-12-05 03:09:34.551617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:52.013 [2024-12-05 03:09:34.551644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.013 [2024-12-05 03:09:34.551663] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:52.013 [2024-12-05 03:09:34.551689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.013 [2024-12-05 03:09:34.551708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:52.013 [2024-12-05 03:09:34.551735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.013 [2024-12-05 03:09:34.551754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:52.013 [2024-12-05 03:09:34.551780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.013 [2024-12-05 03:09:34.551834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:52.013 [2024-12-05 03:09:34.551866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.013 [2024-12-05 03:09:34.551886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:52.013 [2024-12-05 03:09:34.551913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.013 [2024-12-05 03:09:34.551932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:52.013 [2024-12-05 03:09:34.551958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.014 [2024-12-05 03:09:34.551977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.014 [2024-12-05 03:09:34.552023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.014 [2024-12-05 03:09:34.552068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.014 [2024-12-05 03:09:34.552114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 
[2024-12-05 03:09:34.552160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.552206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.552284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.552333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.552380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.552426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.552486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.552546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.552608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.552654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2376 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.552700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.552746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.552808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.552855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.552901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.552947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.552980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.014 [2024-12-05 03:09:34.553001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.553028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.014 [2024-12-05 03:09:34.553048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.553086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.014 [2024-12-05 03:09:34.553107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.553133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.014 [2024-12-05 03:09:34.553153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.553179] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.014 [2024-12-05 03:09:34.553198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.553225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.014 [2024-12-05 03:09:34.553244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.553270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.014 [2024-12-05 03:09:34.553290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.553317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.014 [2024-12-05 03:09:34.553336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.553362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.553381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.553407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.553427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.553474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.553495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.553523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.553543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.553570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.553589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.553615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.553634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.553661] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.553689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:52.014 [2024-12-05 03:09:34.553719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-05 03:09:34.553739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.553781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.553804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.553831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.553852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.553879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.553899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.553925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.553944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.553971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.553990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.554016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.554051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.554077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.554096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.554123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.554142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 
dnr:0 00:25:52.015 [2024-12-05 03:09:34.554169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.554188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.554215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.554234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.554262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.554289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.554318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.554338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.554380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.554399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.554425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.554444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.554470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.554489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.554515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.554534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.554560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.015 [2024-12-05 03:09:34.554579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.554606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.015 [2024-12-05 03:09:34.554626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.554652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.015 [2024-12-05 03:09:34.554672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.554697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.015 [2024-12-05 03:09:34.554717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.554743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.015 [2024-12-05 03:09:34.554762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.554802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.015 [2024-12-05 03:09:34.554826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.554853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.015 [2024-12-05 03:09:34.554873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.554908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.015 [2024-12-05 03:09:34.554974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.555005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.015 [2024-12-05 03:09:34.555026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.555055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.015 [2024-12-05 03:09:34.555077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.555105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.015 [2024-12-05 03:09:34.555127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.555156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.015 [2024-12-05 03:09:34.555177] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.555206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.015 [2024-12-05 03:09:34.555227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.555286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.015 [2024-12-05 03:09:34.555336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.555362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.015 [2024-12-05 03:09:34.555381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.555407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.015 [2024-12-05 03:09:34.555427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.555470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.555490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.555517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.555537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.555564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.555584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.555619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.555640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.555667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.555687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.555713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.015 [2024-12-05 03:09:34.555733] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:52.015 [2024-12-05 03:09:34.555760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.016 [2024-12-05 03:09:34.555779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.555806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.016 [2024-12-05 03:09:34.555826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.555896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.555924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.555954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.555975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.556022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.556069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.556115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.556161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.556207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:52.016 [2024-12-05 03:09:34.556268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.556318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.556365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.556419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.556466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.556512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.556558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.556604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.556650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.016 [2024-12-05 03:09:34.556697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2688 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.016 [2024-12-05 03:09:34.556744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.016 [2024-12-05 03:09:34.556827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.016 [2024-12-05 03:09:34.556887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.016 [2024-12-05 03:09:34.556942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.556970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.016 [2024-12-05 03:09:34.556991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.557018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.016 [2024-12-05 03:09:34.557039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.558683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.016 [2024-12-05 03:09:34.558723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.558790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.558821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.558853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.558874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.558902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.558970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.559003] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.559025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.559054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.559076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.559105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.559127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.559157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.559179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.559228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.559283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.559341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.559363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.559390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.559411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.559437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.559461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.559487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.559507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.559534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.016 [2024-12-05 03:09:34.559554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:52.016 [2024-12-05 03:09:34.559581] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:34.559600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:34.559627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:34.559647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:34.559679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:34.559700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:52.017 8010.67 IOPS, 31.29 MiB/s [2024-12-05T03:10:22.861Z] 8019.60 IOPS, 31.33 MiB/s [2024-12-05T03:10:22.861Z] 8037.09 IOPS, 31.39 MiB/s [2024-12-05T03:10:22.861Z] 8052.00 IOPS, 31.45 MiB/s [2024-12-05T03:10:22.861Z] 8066.46 IOPS, 31.51 MiB/s [2024-12-05T03:10:22.861Z] 8070.29 IOPS, 31.52 MiB/s [2024-12-05T03:10:22.861Z] [2024-12-05 03:09:41.088213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:32832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.088297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.088373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.088400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.088428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:32848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.088449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.088475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:32856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.088514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.088544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.088562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.088588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:32872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.088607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.088632] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:32880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.088650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.088675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:32888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.088693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.088718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:32384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.017 [2024-12-05 03:09:41.088736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.088761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:32392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.017 [2024-12-05 03:09:41.088797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.088825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:32400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.017 [2024-12-05 03:09:41.088844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.088870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.017 [2024-12-05 03:09:41.088888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.088913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:32416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.017 [2024-12-05 03:09:41.088931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.088956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:32424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.017 [2024-12-05 03:09:41.088973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.088999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.017 [2024-12-05 03:09:41.089017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.017 [2024-12-05 03:09:41.089061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 
sqhd:002b p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.089145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:32904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.089192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:32912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.089236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:32920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.089280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.089324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:32936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.089368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:32944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.089412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:32952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.089457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.089501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.089561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.089606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:32984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.089651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.089706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:33000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.089753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.089817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:33016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.017 [2024-12-05 03:09:41.089863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.017 [2024-12-05 03:09:41.089909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.017 [2024-12-05 03:09:41.089955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:52.017 [2024-12-05 03:09:41.089981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.018 [2024-12-05 03:09:41.090000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.090025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.018 [2024-12-05 03:09:41.090044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.090070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.018 [2024-12-05 03:09:41.090089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.090130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.018 [2024-12-05 03:09:41.090149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.090174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.018 [2024-12-05 03:09:41.090192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.090219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.018 [2024-12-05 03:09:41.090237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.090263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.018 [2024-12-05 03:09:41.090294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.090322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.018 [2024-12-05 03:09:41.090342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.090389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:32528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.018 [2024-12-05 03:09:41.090409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.090434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:32536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.018 [2024-12-05 03:09:41.090453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.090479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.018 [2024-12-05 03:09:41.090498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.090522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:32552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:52.018 [2024-12-05 03:09:41.090541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.090566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.018 [2024-12-05 03:09:41.090585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.090611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.018 [2024-12-05 03:09:41.090630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.090676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.018 [2024-12-05 03:09:41.090699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.090726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.018 [2024-12-05 03:09:41.090745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.090801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:33040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.018 [2024-12-05 03:09:41.090824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.090851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:33048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.018 [2024-12-05 03:09:41.090871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.090897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:33056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.018 [2024-12-05 03:09:41.090953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.090983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:33064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.018 [2024-12-05 03:09:41.091003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.091030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:33072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.018 [2024-12-05 03:09:41.091050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.091078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 
lba:33080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.018 [2024-12-05 03:09:41.091097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.091124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.018 [2024-12-05 03:09:41.091157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.091183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.018 [2024-12-05 03:09:41.091202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.091227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:33104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.018 [2024-12-05 03:09:41.091248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.091274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:33112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.018 [2024-12-05 03:09:41.091306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.091332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.018 [2024-12-05 03:09:41.091350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.091375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:33128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.018 [2024-12-05 03:09:41.091394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.091419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.018 [2024-12-05 03:09:41.091437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.091463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.018 [2024-12-05 03:09:41.091481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:52.018 [2024-12-05 03:09:41.091506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:32576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.018 [2024-12-05 03:09:41.091524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.091559] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:32584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.019 [2024-12-05 03:09:41.091578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.091604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:32592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.019 [2024-12-05 03:09:41.091622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.091648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:32600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.019 [2024-12-05 03:09:41.091666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.091691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:32608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.019 [2024-12-05 03:09:41.091710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.091735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.019 [2024-12-05 03:09:41.091754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.091779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:32624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.019 [2024-12-05 03:09:41.091828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.091859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.019 [2024-12-05 03:09:41.091880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.091906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.091926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.091952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.091971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.091996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:33168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.092016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 
dnr:0 00:25:52.019 [2024-12-05 03:09:41.092041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:33176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.092060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:33184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.092105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:33192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.092175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:33200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.092218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:33208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.092264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:33216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.092308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:33224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.092351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.092395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:33240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.092438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:33248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.092482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:33256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.092525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:33264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.092568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:33272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.092614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:33280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.092664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:33288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.092717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:33296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.092763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:33304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.019 [2024-12-05 03:09:41.092825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:32640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.019 [2024-12-05 03:09:41.092869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.019 [2024-12-05 03:09:41.092913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.019 [2024-12-05 03:09:41.092957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.092983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.019 [2024-12-05 03:09:41.093002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.093027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.019 [2024-12-05 03:09:41.093045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.093071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:32680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.019 [2024-12-05 03:09:41.093089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.093114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.019 [2024-12-05 03:09:41.093133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.093158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:32696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.019 [2024-12-05 03:09:41.093176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:52.019 [2024-12-05 03:09:41.093201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.020 [2024-12-05 03:09:41.093220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.093244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:32712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.020 [2024-12-05 03:09:41.093271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.093298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.020 [2024-12-05 03:09:41.093318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.093343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:32728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.020 [2024-12-05 03:09:41.093362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.093387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:52.020 [2024-12-05 03:09:41.093406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.093431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.020 [2024-12-05 03:09:41.093449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.093493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.020 [2024-12-05 03:09:41.093513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.094357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.020 [2024-12-05 03:09:41.094396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.094441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:41.094465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.094501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:33320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:41.094528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.094563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:33328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:41.094584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.094619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:41.094651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.094700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:33344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:41.094719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.094753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:41.094779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.094827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 
nsid:1 lba:33360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:41.094866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.094959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:41.094987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.095025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:33376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:41.095046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.095081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:41.095102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.095137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:41.095158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.095193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:33400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:41.095228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.095284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.020 [2024-12-05 03:09:41.095318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.095352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:32776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.020 [2024-12-05 03:09:41.095372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.095404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:32784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.020 [2024-12-05 03:09:41.095423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.095455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.020 [2024-12-05 03:09:41.095475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.095507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:32800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.020 [2024-12-05 03:09:41.095526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.095558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:32808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.020 [2024-12-05 03:09:41.095577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.095621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:32816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.020 [2024-12-05 03:09:41.095642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:41.095675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:32824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.020 [2024-12-05 03:09:41.095694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:52.020 7915.73 IOPS, 30.92 MiB/s [2024-12-05T03:10:22.864Z] 7549.06 IOPS, 29.49 MiB/s [2024-12-05T03:10:22.864Z] 7573.82 IOPS, 29.59 MiB/s [2024-12-05T03:10:22.864Z] 7599.17 IOPS, 29.68 MiB/s [2024-12-05T03:10:22.864Z] 7613.21 IOPS, 29.74 MiB/s [2024-12-05T03:10:22.864Z] 7625.95 IOPS, 29.79 MiB/s [2024-12-05T03:10:22.864Z] 7639.76 IOPS, 29.84 MiB/s [2024-12-05T03:10:22.864Z] [2024-12-05 03:09:48.171834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:48.171937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:48.172015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:48.172042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:48.172073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:48.172093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:48.172119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:48.172139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:48.172179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:48.172198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 
dnr:0 00:25:52.020 [2024-12-05 03:09:48.172223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:48.172242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:48.172268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:48.172287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:48.172312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.020 [2024-12-05 03:09:48.172330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:48.172356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.020 [2024-12-05 03:09:48.172374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:52.020 [2024-12-05 03:09:48.172400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.172441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.172469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.172488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.172514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.172533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.172558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.172577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.172602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.172621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.172647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.172665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.172690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.172709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.172734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.172753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.172779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.172818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.172846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.172865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.172890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.172909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.172935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.172954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.172979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.172998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.173055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.173101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.021 [2024-12-05 03:09:48.173153] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.021 [2024-12-05 03:09:48.173201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.021 [2024-12-05 03:09:48.173246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.021 [2024-12-05 03:09:48.173291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.021 [2024-12-05 03:09:48.173336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.021 [2024-12-05 03:09:48.173381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.021 [2024-12-05 03:09:48.173425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.021 [2024-12-05 03:09:48.173470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.021 [2024-12-05 03:09:48.173514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.021 [2024-12-05 03:09:48.173559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:52.021 [2024-12-05 03:09:48.173633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.021 [2024-12-05 03:09:48.173680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.021 [2024-12-05 03:09:48.173726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.021 [2024-12-05 03:09:48.173783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.021 [2024-12-05 03:09:48.173835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.021 [2024-12-05 03:09:48.173881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.173928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.173954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.173973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.174022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.174042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.174069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.174089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.174115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.174134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.174160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.174179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.174205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.174234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.174263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-12-05 03:09:48.174282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:52.021 [2024-12-05 03:09:48.174309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.174328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.174353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.174373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.174400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.174419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.174446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.174465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.174491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.174510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.174536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.174555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.174581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.174599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.174626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.174646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.174677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.022 [2024-12-05 03:09:48.174698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.174724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.022 [2024-12-05 03:09:48.174743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.174788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.022 [2024-12-05 03:09:48.174822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.174851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.022 [2024-12-05 03:09:48.174870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.174897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.022 [2024-12-05 03:09:48.174942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.174990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.022 [2024-12-05 03:09:48.175011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.175040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.022 [2024-12-05 03:09:48.175060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.175089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.022 [2024-12-05 03:09:48.175110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 
00:25:52.022 [2024-12-05 03:09:48.175138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.175159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.175187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.175208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.175251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.175286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.175326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.175345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.175371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.175391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.175417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.175437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.175463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.175490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.175518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.175539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.175598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.175619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.175645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.175664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.175690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.175710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.175736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.175755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.175797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.175817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.175857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.175880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.175908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.175928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.175956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.175976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.176003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.176023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:52.022 [2024-12-05 03:09:48.176050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-12-05 03:09:48.176071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.176098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-12-05 03:09:48.176118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.176155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-12-05 03:09:48.176190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.176215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-12-05 03:09:48.176235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.176261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-12-05 03:09:48.176280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.176306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-12-05 03:09:48.176325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.176368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-12-05 03:09:48.176393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.176426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.176448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.176475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.176510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.176537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.176558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.176585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.176606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.176633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.176653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.176696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:52.023 [2024-12-05 03:09:48.176717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.176745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.176766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.176802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.176841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.176873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.176895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.176924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.176959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.176988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.177008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.177036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.177056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.177083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.177103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.177161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.177182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.177209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.177229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.177258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:23568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.177282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.177327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.177348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.177376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.177397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.177458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.177480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.177508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.177538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.177568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.177589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.177617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.177637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.177665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.177685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.177727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.177748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.177790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-12-05 03:09:48.177827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.177855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-12-05 03:09:48.177890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.177923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-12-05 03:09:48.177945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.177973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-12-05 03:09:48.177994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.178023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-12-05 03:09:48.178044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.179027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-12-05 03:09:48.179067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.179117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-12-05 03:09:48.179142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.179191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-12-05 03:09:48.179263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:52.023 [2024-12-05 03:09:48.179331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.023 [2024-12-05 03:09:48.179353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:09:48.179387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:09:48.179407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:09:48.179441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:09:48.179461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 
dnr:0 00:25:52.024 [2024-12-05 03:09:48.179509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:09:48.179529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:09:48.179562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:09:48.179582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:09:48.179615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:09:48.179634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:09:48.179668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:09:48.179687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:09:48.179738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:09:48.179778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:52.024 7576.14 IOPS, 29.59 MiB/s [2024-12-05T03:10:22.868Z] 7246.74 IOPS, 28.31 MiB/s [2024-12-05T03:10:22.868Z] 6944.79 IOPS, 27.13 MiB/s [2024-12-05T03:10:22.868Z] 6667.00 IOPS, 26.04 MiB/s [2024-12-05T03:10:22.868Z] 6410.58 IOPS, 25.04 MiB/s [2024-12-05T03:10:22.868Z] 6173.15 IOPS, 24.11 MiB/s [2024-12-05T03:10:22.868Z] 5952.68 IOPS, 23.25 MiB/s [2024-12-05T03:10:22.868Z] 5796.76 IOPS, 22.64 MiB/s [2024-12-05T03:10:22.868Z] 5861.40 IOPS, 22.90 MiB/s [2024-12-05T03:10:22.868Z] 5924.19 IOPS, 23.14 MiB/s [2024-12-05T03:10:22.868Z] 5985.81 IOPS, 23.38 MiB/s [2024-12-05T03:10:22.868Z] 6042.97 IOPS, 23.61 MiB/s [2024-12-05T03:10:22.868Z] 6094.18 IOPS, 23.81 MiB/s [2024-12-05T03:10:22.868Z] 6140.17 IOPS, 23.99 MiB/s [2024-12-05T03:10:22.868Z] [2024-12-05 03:10:01.458539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.458615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.458703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.458734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.458779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.458835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.458868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.458888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.458924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.458962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.458989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.459009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.459054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.459099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.459144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.459190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.459249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.459306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.459349] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.459392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.459444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.459492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-12-05 03:10:01.459536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-12-05 03:10:01.459582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-12-05 03:10:01.459626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-12-05 03:10:01.459688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-12-05 03:10:01.459732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-12-05 03:10:01.459777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21440 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:52.024 [2024-12-05 03:10:01.459837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-12-05 03:10:01.459887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.459973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.459995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.460013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.460032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.460049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-12-05 03:10:01.460079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.024 [2024-12-05 03:10:01.460097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.025 [2024-12-05 03:10:01.460132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.025 [2024-12-05 03:10:01.460167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.025 [2024-12-05 03:10:01.460201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.025 [2024-12-05 03:10:01.460235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.025 
[2024-12-05 03:10:01.460270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.025 [2024-12-05 03:10:01.460305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.025 [2024-12-05 03:10:01.460340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.025 [2024-12-05 03:10:01.460375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.025 [2024-12-05 03:10:01.460409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.025 [2024-12-05 03:10:01.460444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.025 [2024-12-05 03:10:01.460479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.025 [2024-12-05 03:10:01.460521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.460557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.460592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.460646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.460681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.460716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.460764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.460823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.460859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.460894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.460930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.460968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.460986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.461003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.461021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.461047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.461068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.461085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.461104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.461121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.461140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.461157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.461190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.461206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.461224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.461240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.461259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.461275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.461293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.461310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.461328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.461345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.461363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.461379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.461397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.461414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.461432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-12-05 03:10:01.461449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.461467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.025 [2024-12-05 03:10:01.461483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.461508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.025 [2024-12-05 03:10:01.461526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.461545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.025 [2024-12-05 03:10:01.461561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-12-05 03:10:01.461579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.461596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.461614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.461630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.461649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.461665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.461683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.461699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.461717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.461734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.461752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.461768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 
[2024-12-05 03:10:01.461799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.461820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.461839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.461856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.461874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.461891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.461909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-12-05 03:10:01.461926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.461944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-12-05 03:10:01.461968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.461988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-12-05 03:10:01.462005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-12-05 03:10:01.462040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-12-05 03:10:01.462074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-12-05 03:10:01.462109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-12-05 03:10:01.462144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-12-05 03:10:01.462179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.462213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.462248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.462282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.462317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.462351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.462386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.462428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.462463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.462498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:107 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.462533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.462567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.026 [2024-12-05 03:10:01.462602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-12-05 03:10:01.462637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-12-05 03:10:01.462675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-12-05 03:10:01.462711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-12-05 03:10:01.462745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-12-05 03:10:01.462803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-12-05 03:10:01.462839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-12-05 03:10:01.462874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21768 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-12-05 03:10:01.462943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.462981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-12-05 03:10:01.462998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.463018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-12-05 03:10:01.463036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-12-05 03:10:01.463073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-12-05 03:10:01.463091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.463111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-12-05 03:10:01.463133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.463153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-12-05 03:10:01.463171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.463190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-12-05 03:10:01.463208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.463227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-12-05 03:10:01.463245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.463278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002bf00 is same with the state(6) to be set 00:25:52.027 [2024-12-05 03:10:01.463314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.027 [2024-12-05 03:10:01.463334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.027 [2024-12-05 03:10:01.463349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21832 len:8 PRP1 0x0 PRP2 0x0 00:25:52.027 [2024-12-05 03:10:01.463366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.463387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.027 
[2024-12-05 03:10:01.463400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.027 [2024-12-05 03:10:01.463413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22288 len:8 PRP1 0x0 PRP2 0x0 00:25:52.027 [2024-12-05 03:10:01.463429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.463445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.027 [2024-12-05 03:10:01.463457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.027 [2024-12-05 03:10:01.463478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22296 len:8 PRP1 0x0 PRP2 0x0 00:25:52.027 [2024-12-05 03:10:01.463496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.463512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.027 [2024-12-05 03:10:01.463524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.027 [2024-12-05 03:10:01.463537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:8 PRP1 0x0 PRP2 0x0 00:25:52.027 [2024-12-05 03:10:01.463553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.463568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.027 [2024-12-05 03:10:01.463581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.027 [2024-12-05 03:10:01.463594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22312 len:8 PRP1 0x0 PRP2 0x0 00:25:52.027 [2024-12-05 03:10:01.463609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.463625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.027 [2024-12-05 03:10:01.463638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.027 [2024-12-05 03:10:01.463650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22320 len:8 PRP1 0x0 PRP2 0x0 00:25:52.027 [2024-12-05 03:10:01.463666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.463684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.027 [2024-12-05 03:10:01.463698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.027 [2024-12-05 03:10:01.463711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22328 len:8 PRP1 0x0 PRP2 0x0 00:25:52.027 [2024-12-05 03:10:01.463726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.463742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.027 [2024-12-05 03:10:01.463754] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.027 [2024-12-05 03:10:01.463767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:8 PRP1 0x0 PRP2 0x0 00:25:52.027 [2024-12-05 03:10:01.463783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.463799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.027 [2024-12-05 03:10:01.463823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.027 [2024-12-05 03:10:01.463841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22344 len:8 PRP1 0x0 PRP2 0x0 00:25:52.027 [2024-12-05 03:10:01.463858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.463877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.027 [2024-12-05 03:10:01.463890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.027 [2024-12-05 03:10:01.463903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22352 len:8 PRP1 0x0 PRP2 0x0 00:25:52.027 [2024-12-05 03:10:01.463919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.463942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.027 [2024-12-05 03:10:01.463955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.027 [2024-12-05 03:10:01.463969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22360 len:8 PRP1 0x0 PRP2 0x0 00:25:52.027 [2024-12-05 03:10:01.463985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.464000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.027 [2024-12-05 03:10:01.464012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.027 [2024-12-05 03:10:01.464025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:8 PRP1 0x0 PRP2 0x0 00:25:52.027 [2024-12-05 03:10:01.464041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.464056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.027 [2024-12-05 03:10:01.464068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.027 [2024-12-05 03:10:01.464081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22376 len:8 PRP1 0x0 PRP2 0x0 00:25:52.027 [2024-12-05 03:10:01.464097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.464112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.027 [2024-12-05 03:10:01.464125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:52.027 [2024-12-05 03:10:01.464137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22384 len:8 PRP1 0x0 PRP2 0x0 00:25:52.027 [2024-12-05 03:10:01.464153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.464170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.027 [2024-12-05 03:10:01.464183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.027 [2024-12-05 03:10:01.464195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22392 len:8 PRP1 0x0 PRP2 0x0 00:25:52.027 [2024-12-05 03:10:01.464211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.464227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.027 [2024-12-05 03:10:01.464239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.027 [2024-12-05 03:10:01.464251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:8 PRP1 0x0 PRP2 0x0 00:25:52.027 [2024-12-05 03:10:01.464267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.464282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.027 [2024-12-05 03:10:01.464294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.027 [2024-12-05 03:10:01.464307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22408 len:8 PRP1 0x0 PRP2 0x0 00:25:52.027 [2024-12-05 03:10:01.464323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-12-05 03:10:01.465831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.028 [2024-12-05 03:10:01.465939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-12-05 03:10:01.465969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-12-05 03:10:01.466034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b500 (9): Bad file descriptor 00:25:52.028 [2024-12-05 03:10:01.466486] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.028 [2024-12-05 03:10:01.466526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b500 with addr=10.0.0.3, port=4421 00:25:52.028 [2024-12-05 03:10:01.466549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:25:52.028 [2024-12-05 03:10:01.466624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b500 (9): Bad file descriptor 00:25:52.028 [2024-12-05 03:10:01.466686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 
2] Ctrlr is in error state
00:25:52.028 [2024-12-05 03:10:01.466711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:52.028 [2024-12-05 03:10:01.466729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:52.028 [2024-12-05 03:10:01.466770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:52.028 [2024-12-05 03:10:01.466793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:52.028 6186.53 IOPS, 24.17 MiB/s [2024-12-05T03:10:22.872Z] 6223.22 IOPS, 24.31 MiB/s [2024-12-05T03:10:22.872Z] 6268.50 IOPS, 24.49 MiB/s [2024-12-05T03:10:22.872Z] 6310.03 IOPS, 24.65 MiB/s [2024-12-05T03:10:22.872Z] 6348.68 IOPS, 24.80 MiB/s [2024-12-05T03:10:22.872Z] 6387.20 IOPS, 24.95 MiB/s [2024-12-05T03:10:22.872Z] 6424.26 IOPS, 25.09 MiB/s [2024-12-05T03:10:22.872Z] 6454.21 IOPS, 25.21 MiB/s [2024-12-05T03:10:22.872Z] 6485.34 IOPS, 25.33 MiB/s [2024-12-05T03:10:22.872Z] 6516.51 IOPS, 25.46 MiB/s [2024-12-05T03:10:22.872Z] [2024-12-05 03:10:11.537625] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:25:52.028 6542.50 IOPS, 25.56 MiB/s [2024-12-05T03:10:22.872Z] 6569.83 IOPS, 25.66 MiB/s [2024-12-05T03:10:22.872Z] 6596.19 IOPS, 25.77 MiB/s [2024-12-05T03:10:22.872Z] 6619.61 IOPS, 25.86 MiB/s [2024-12-05T03:10:22.872Z] 6636.66 IOPS, 25.92 MiB/s [2024-12-05T03:10:22.872Z] 6657.43 IOPS, 26.01 MiB/s [2024-12-05T03:10:22.872Z] 6679.40 IOPS, 26.09 MiB/s [2024-12-05T03:10:22.872Z] 6700.55 IOPS, 26.17 MiB/s [2024-12-05T03:10:22.872Z] 6718.98 IOPS, 26.25 MiB/s [2024-12-05T03:10:22.872Z] 6737.95 IOPS, 26.32 MiB/s [2024-12-05T03:10:22.872Z] Received shutdown signal, test time was about 55.298701 seconds
00:25:52.028
00:25:52.028 Latency(us)
00:25:52.028 [2024-12-05T03:10:22.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:52.028 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:52.028 Verification LBA range: start 0x0 length 0x4000
00:25:52.028 Nvme0n1 : 55.30 6740.68 26.33 0.00 0.00 18964.28 1482.01 7046430.72
00:25:52.028 [2024-12-05T03:10:22.872Z] ===================================================================================================================
00:25:52.028 [2024-12-05T03:10:22.872Z] Total : 6740.68 26.33 0.00 0.00 18964.28 1482.01 7046430.72
00:25:52.028 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:52.028 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:25:52.028 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:25:52.028 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:25:52.028 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:52.028 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:25:52.287 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:52.287 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:25:52.287 03:10:22
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:52.288 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:52.288 rmmod nvme_tcp 00:25:52.288 rmmod nvme_fabrics 00:25:52.288 rmmod nvme_keyring 00:25:52.288 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:52.288 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:25:52.288 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:25:52.288 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 86996 ']' 00:25:52.288 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 86996 00:25:52.288 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 86996 ']' 00:25:52.288 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 86996 00:25:52.288 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:25:52.288 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.288 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86996 00:25:52.288 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:52.288 killing process with pid 86996 00:25:52.288 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:52.288 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86996' 00:25:52.288 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 86996 00:25:52.288 03:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 86996 00:25:53.226 03:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:53.226 03:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:53.226 03:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:53.226 03:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:25:53.226 03:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:25:53.226 03:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:53.226 03:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:25:53.226 03:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:53.226 03:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:53.226 03:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:53.226 03:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:53.226 03:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:53.226 03:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:53.226 03:10:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:53.226 03:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:53.226 03:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:53.226 03:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:53.226 03:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:53.226 03:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:53.500 03:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:53.500 03:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:53.500 03:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:53.500 03:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:53.500 03:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.500 03:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.500 03:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.500 03:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:25:53.500 00:25:53.500 real 1m3.225s 00:25:53.500 user 2m55.563s 00:25:53.500 sys 0m16.802s 00:25:53.500 03:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:53.500 03:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:53.500 ************************************ 00:25:53.500 END TEST nvmf_host_multipath 00:25:53.500 ************************************ 00:25:53.500 03:10:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:25:53.500 03:10:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:53.500 03:10:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.500 03:10:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.500 ************************************ 00:25:53.500 START TEST nvmf_timeout 00:25:53.500 ************************************ 00:25:53.500 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:25:53.500 * Looking for test storage... 
00:25:53.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:53.500 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:53.500 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:25:53.500 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:53.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.794 --rc genhtml_branch_coverage=1 00:25:53.794 --rc genhtml_function_coverage=1 00:25:53.794 --rc genhtml_legend=1 00:25:53.794 --rc geninfo_all_blocks=1 00:25:53.794 --rc geninfo_unexecuted_blocks=1 00:25:53.794 00:25:53.794 ' 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:53.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.794 --rc genhtml_branch_coverage=1 00:25:53.794 --rc genhtml_function_coverage=1 00:25:53.794 --rc genhtml_legend=1 00:25:53.794 --rc geninfo_all_blocks=1 00:25:53.794 --rc geninfo_unexecuted_blocks=1 00:25:53.794 00:25:53.794 ' 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:53.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.794 --rc genhtml_branch_coverage=1 00:25:53.794 --rc genhtml_function_coverage=1 00:25:53.794 --rc genhtml_legend=1 00:25:53.794 --rc geninfo_all_blocks=1 00:25:53.794 --rc geninfo_unexecuted_blocks=1 00:25:53.794 00:25:53.794 ' 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:53.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.794 --rc genhtml_branch_coverage=1 00:25:53.794 --rc genhtml_function_coverage=1 00:25:53.794 --rc genhtml_legend=1 00:25:53.794 --rc geninfo_all_blocks=1 00:25:53.794 --rc geninfo_unexecuted_blocks=1 00:25:53.794 00:25:53.794 ' 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.794 
03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:25:53.794 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:53.795 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:53.795 03:10:24 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:53.795 Cannot find device "nvmf_init_br" 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:53.795 Cannot find device "nvmf_init_br2" 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:25:53.795 Cannot find device "nvmf_tgt_br" 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:53.795 Cannot find device "nvmf_tgt_br2" 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:53.795 Cannot find device "nvmf_init_br" 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:53.795 Cannot find device "nvmf_init_br2" 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:53.795 Cannot find device "nvmf_tgt_br" 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:53.795 Cannot find device "nvmf_tgt_br2" 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:53.795 Cannot find device "nvmf_br" 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:53.795 Cannot find device "nvmf_init_if" 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:53.795 Cannot find device "nvmf_init_if2" 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:53.795 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:53.795 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:53.795 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:54.053 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:54.053 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:25:54.053 00:25:54.053 --- 10.0.0.3 ping statistics --- 00:25:54.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.053 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:54.053 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:54.053 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 00:25:54.053 00:25:54.053 --- 10.0.0.4 ping statistics --- 00:25:54.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.053 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:54.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:54.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:25:54.053 00:25:54.053 --- 10.0.0.1 ping statistics --- 00:25:54.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.053 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:54.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:25:54.053 00:25:54.053 --- 10.0.0.2 ping statistics --- 00:25:54.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.053 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=88230 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 88230 00:25:54.053 03:10:24 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 88230 ']' 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.053 03:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.311 [2024-12-05 03:10:24.937536] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:25:54.311 [2024-12-05 03:10:24.937689] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.311 [2024-12-05 03:10:25.126862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:54.578 [2024-12-05 03:10:25.252246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.578 [2024-12-05 03:10:25.252322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.578 [2024-12-05 03:10:25.252346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.578 [2024-12-05 03:10:25.252377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.578 [2024-12-05 03:10:25.252396] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:54.578 [2024-12-05 03:10:25.254557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.578 [2024-12-05 03:10:25.254566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.841 [2024-12-05 03:10:25.456743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:55.098 03:10:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:55.098 03:10:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:25:55.098 03:10:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:55.098 03:10:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:55.098 03:10:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.355 03:10:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.356 03:10:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:55.356 03:10:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:55.613 [2024-12-05 03:10:26.213388] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.613 03:10:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:55.871 Malloc0 00:25:55.871 03:10:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:56.129 03:10:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:56.388 03:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:56.648 [2024-12-05 03:10:27.283237] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:56.648 03:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=88279 00:25:56.648 03:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:25:56.648 03:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 88279 /var/tmp/bdevperf.sock 00:25:56.648 03:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 88279 ']' 00:25:56.648 03:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:56.648 03:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:56.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:56.648 03:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:56.648 03:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:56.648 03:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:56.648 [2024-12-05 03:10:27.397051] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:25:56.648 [2024-12-05 03:10:27.397228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88279 ] 00:25:56.906 [2024-12-05 03:10:27.566223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.906 [2024-12-05 03:10:27.653567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.166 [2024-12-05 03:10:27.806616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:57.733 03:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:57.733 03:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:25:57.733 03:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:57.733 03:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:58.301 NVMe0n1 00:25:58.301 03:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=88304 00:25:58.301 03:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:58.301 03:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:25:58.301 Running I/O for 10 seconds... 
00:25:59.239 03:10:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:59.502 6420.00 IOPS, 25.08 MiB/s [2024-12-05T03:10:30.346Z] [2024-12-05 03:10:30.158640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.502 [2024-12-05 03:10:30.158711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.502 [2024-12-05 03:10:30.158745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.502 [2024-12-05 03:10:30.158775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.502 [2024-12-05 03:10:30.158826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.502 [2024-12-05 03:10:30.158842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.502 [2024-12-05 03:10:30.158860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.502 [2024-12-05 03:10:30.158887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.502 [2024-12-05 03:10:30.158907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.502 [2024-12-05 03:10:30.158945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.502 [2024-12-05 03:10:30.158964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.158978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.158995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60064 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:59.503 [2024-12-05 03:10:30.159441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159714] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.159983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.159998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.160010] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.160025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.160037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.160052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.160063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.160081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.160092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.160107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.503 [2024-12-05 03:10:30.160119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.503 [2024-12-05 03:10:30.160134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 
[2024-12-05 03:10:30.160598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.504 [2024-12-05 03:10:30.160921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.504 [2024-12-05 03:10:30.160949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.504 [2024-12-05 03:10:30.160977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.160994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.504 [2024-12-05 03:10:30.161006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.161042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.504 [2024-12-05 03:10:30.161055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.161072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.504 [2024-12-05 03:10:30.161085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.161102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.504 [2024-12-05 03:10:30.161115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.161131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.504 [2024-12-05 03:10:30.161144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.161160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.504 [2024-12-05 03:10:30.161187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.161203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.504 [2024-12-05 03:10:30.161214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.161230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:36 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.504 [2024-12-05 03:10:30.161242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.161257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.504 [2024-12-05 03:10:30.161270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.161287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.504 [2024-12-05 03:10:30.161299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.161315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.504 [2024-12-05 03:10:30.161327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.504 [2024-12-05 03:10:30.161343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.161355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.161382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.505 [2024-12-05 03:10:30.161410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.505 [2024-12-05 03:10:30.161438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.161468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.161496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59696 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.161526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.161555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.161582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.161610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.161638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.505 [2024-12-05 03:10:30.161672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.161700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.161727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.161757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.161795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:59.505 [2024-12-05 03:10:30.161828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.161859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.161887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.161914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.161942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.161970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.161987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.161999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.162015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.162027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.162043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.162055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.162070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.162092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.162109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.162122] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.162140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.162152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.162168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.162180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.162197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.162208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.162229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.162241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.162257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.162269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.162284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.162296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.162312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.162324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.162339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.162352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.162368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.162379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.162395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.162407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.162422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.162435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.162452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.162464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.505 [2024-12-05 03:10:30.162480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.505 [2024-12-05 03:10:30.162498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.506 [2024-12-05 03:10:30.162514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.506 [2024-12-05 03:10:30.162526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.506 [2024-12-05 03:10:30.162542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.506 [2024-12-05 03:10:30.162555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.506 [2024-12-05 03:10:30.162570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.506 [2024-12-05 03:10:30.162582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.506 [2024-12-05 03:10:30.162600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:25:59.506 [2024-12-05 03:10:30.162618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.506 [2024-12-05 03:10:30.162631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.506 [2024-12-05 03:10:30.162643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59984 len:8 PRP1 0x0 PRP2 0x0 00:25:59.506 [2024-12-05 03:10:30.162657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.506 [2024-12-05 03:10:30.163233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:59.506 [2024-12-05 03:10:30.163379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:25:59.506 [2024-12-05 03:10:30.163527] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.506 [2024-12-05 03:10:30.163567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with 
addr=10.0.0.3, port=4420 00:25:59.506 [2024-12-05 03:10:30.163590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:25:59.506 [2024-12-05 03:10:30.163618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:25:59.506 [2024-12-05 03:10:30.163645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:25:59.506 [2024-12-05 03:10:30.163659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:25:59.506 [2024-12-05 03:10:30.163676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:59.506 [2024-12-05 03:10:30.163691] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:25:59.506 [2024-12-05 03:10:30.163711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:59.506 03:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:26:01.380 3722.50 IOPS, 14.54 MiB/s [2024-12-05T03:10:32.224Z] 2481.67 IOPS, 9.69 MiB/s [2024-12-05T03:10:32.224Z] [2024-12-05 03:10:32.163879] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.380 [2024-12-05 03:10:32.163947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:01.380 [2024-12-05 03:10:32.163971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:01.380 [2024-12-05 03:10:32.164001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:01.380 [2024-12-05 03:10:32.164036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:01.380 [2024-12-05 03:10:32.164051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:01.381 [2024-12-05 03:10:32.164068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:01.381 [2024-12-05 03:10:32.164083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:26:01.381 [2024-12-05 03:10:32.164099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:01.381 03:10:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:26:01.381 03:10:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:01.381 03:10:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:01.640 03:10:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:26:01.640 03:10:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:26:01.640 03:10:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:01.640 03:10:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:01.899 03:10:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:26:01.899 03:10:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:26:03.535 1861.25 IOPS, 7.27 MiB/s [2024-12-05T03:10:34.379Z] 1489.00 IOPS, 5.82 MiB/s [2024-12-05T03:10:34.379Z] [2024-12-05 03:10:34.164247] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.535 [2024-12-05 03:10:34.164322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:03.535 [2024-12-05 03:10:34.164346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:03.535 [2024-12-05 03:10:34.164376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:03.535 [2024-12-05 03:10:34.164406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:03.535 [2024-12-05 03:10:34.164420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:03.535 [2024-12-05 03:10:34.164436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:03.535 [2024-12-05 03:10:34.164452] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:03.535 [2024-12-05 03:10:34.164468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:05.421 1240.83 IOPS, 4.85 MiB/s [2024-12-05T03:10:36.265Z] 1063.57 IOPS, 4.15 MiB/s [2024-12-05T03:10:36.265Z] [2024-12-05 03:10:36.164522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:05.421 [2024-12-05 03:10:36.164597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:05.421 [2024-12-05 03:10:36.164614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:05.421 [2024-12-05 03:10:36.164651] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:26:05.421 [2024-12-05 03:10:36.164672] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
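A note on the reconnect loop above: errno = 111 in the uring_sock_create connect() errors is ECONNREFUSED on Linux, so each retry is being refused at the TCP level; by the final attempt the controller is already marked failed and the reset is abandoned ("already in failed state" / "Resetting controller failed."). The errno mapping can be confirmed with a one-off shell check (a side check, not output from this run):

python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# prints: ECONNREFUSED - Connection refused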
00:26:06.358 930.62 IOPS, 3.64 MiB/s 00:26:06.358 Latency(us) 00:26:06.358 [2024-12-05T03:10:37.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.358 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:06.358 Verification LBA range: start 0x0 length 0x4000 00:26:06.358 NVMe0n1 : 8.17 910.89 3.56 15.66 0.00 137921.07 4051.32 7015926.69 00:26:06.358 [2024-12-05T03:10:37.202Z] =================================================================================================================== 00:26:06.358 [2024-12-05T03:10:37.202Z] Total : 910.89 3.56 15.66 0.00 137921.07 4051.32 7015926.69 00:26:06.358 { 00:26:06.358 "results": [ 00:26:06.358 { 00:26:06.358 "job": "NVMe0n1", 00:26:06.358 "core_mask": "0x4", 00:26:06.358 "workload": "verify", 00:26:06.358 "status": "finished", 00:26:06.358 "verify_range": { 00:26:06.358 "start": 0, 00:26:06.358 "length": 16384 00:26:06.358 }, 00:26:06.358 "queue_depth": 128, 00:26:06.358 "io_size": 4096, 00:26:06.358 "runtime": 8.173291, 00:26:06.358 "iops": 910.8937880714145, 00:26:06.358 "mibps": 3.558178859653963, 00:26:06.358 "io_failed": 128, 00:26:06.358 "io_timeout": 0, 00:26:06.358 "avg_latency_us": 137921.07371691294, 00:26:06.358 "min_latency_us": 4051.316363636364, 00:26:06.358 "max_latency_us": 7015926.69090909 00:26:06.358 } 00:26:06.358 ], 00:26:06.358 "core_count": 1 00:26:06.358 } 00:26:06.926 03:10:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:26:06.926 03:10:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:06.926 03:10:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:07.185 03:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:26:07.185 03:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:26:07.185 03:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:07.185 03:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:07.445 03:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:26:07.445 03:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 88304 00:26:07.445 03:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 88279 00:26:07.445 03:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 88279 ']' 00:26:07.445 03:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 88279 00:26:07.445 03:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:26:07.445 03:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:07.445 03:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88279 00:26:07.445 03:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:07.445 03:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:07.445 killing process with pid 88279 00:26:07.445 03:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88279' 00:26:07.445 Received shutdown signal, test time was about 9.269768 seconds 
00:26:07.445 00:26:07.445 Latency(us) 00:26:07.445 [2024-12-05T03:10:38.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.445 [2024-12-05T03:10:38.289Z] =================================================================================================================== 00:26:07.445 [2024-12-05T03:10:38.289Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:07.445 03:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 88279 00:26:07.445 03:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 88279 00:26:08.384 03:10:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:08.643 [2024-12-05 03:10:39.369389] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:08.643 03:10:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=88428 00:26:08.643 03:10:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:08.643 03:10:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 88428 /var/tmp/bdevperf.sock 00:26:08.643 03:10:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 88428 ']' 00:26:08.643 03:10:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:08.643 03:10:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:08.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:08.643 03:10:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:08.643 03:10:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:08.643 03:10:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.643 [2024-12-05 03:10:39.473869] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
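The summary in the results block above is internally consistent: at the 4096-byte I/O size used by bdevperf, the reported 910.89 IOPS corresponds to roughly 3.56 MiB/s, matching the "mibps" field. A quick cross-check using the values copied from the JSON (a sanity calculation, not output from the run):

awk 'BEGIN { printf "%.2f MiB/s\n", 910.8937880714145 * 4096 / (1024 * 1024) }'
# -> 3.56 MiB/s (the JSON reports "mibps": 3.558178859653963)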
00:26:08.643 [2024-12-05 03:10:39.474015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88428 ] 00:26:08.902 [2024-12-05 03:10:39.642895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.902 [2024-12-05 03:10:39.731976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:09.162 [2024-12-05 03:10:39.891543] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:09.730 03:10:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.730 03:10:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:09.730 03:10:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:09.988 03:10:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:26:10.246 NVMe0n1 00:26:10.246 03:10:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=88452 00:26:10.246 03:10:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:10.246 03:10:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:26:10.246 Running I/O for 10 seconds... 
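The bdevperf setup above attaches the controller with an explicit reconnect policy: --reconnect-delay-sec 1, --fast-io-fail-timeout-sec 2 and --ctrlr-loss-timeout-sec 5. Condensed into a standalone sketch below; the paths, address and NQN are copied from the log, and the comments describe the intended behaviour of the flags as understood from their names and SPDK's bdev_nvme documentation, not something asserted by this run:

#!/usr/bin/env bash
# Sketch of the attach step used by host/timeout.sh; not part of the log itself.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

# Retry option passed by the test script, kept verbatim from the log above.
$RPC -s "$SOCK" bdev_nvme_set_options -r -1

# Reconnect every 1 s after a connection loss, start failing I/O back to the
# caller after 2 s without a connection, and give up on the controller after 5 s.
$RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
  --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1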
00:26:11.180 03:10:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:11.441 6437.00 IOPS, 25.14 MiB/s [2024-12-05T03:10:42.285Z] [2024-12-05 03:10:42.183699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.441 [2024-12-05 03:10:42.183801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.441 [2024-12-05 03:10:42.183842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.441 [2024-12-05 03:10:42.183877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.441 [2024-12-05 03:10:42.183896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.441 [2024-12-05 03:10:42.183909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.441 [2024-12-05 03:10:42.183932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.441 [2024-12-05 03:10:42.183946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.441 [2024-12-05 03:10:42.183962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.441 [2024-12-05 03:10:42.183975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.441 [2024-12-05 03:10:42.183992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.441 [2024-12-05 03:10:42.184005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.441 [2024-12-05 03:10:42.184022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.441 [2024-12-05 03:10:42.184034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.441 [2024-12-05 03:10:42.184084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.441 [2024-12-05 03:10:42.184098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.441 [2024-12-05 03:10:42.184115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.441 [2024-12-05 03:10:42.184129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58728 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:11.442 [2024-12-05 03:10:42.184492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184819] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.184968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.184982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.185001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.185015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.185032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.185046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.185063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.185077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.185095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.185108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.185125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.185139] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.185156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.185170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.185187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.185201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.185218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.185232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.185253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.185267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.185285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.185299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.185316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.185344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.185363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.185377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.442 [2024-12-05 03:10:42.185395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.442 [2024-12-05 03:10:42.185408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.185426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.185439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.185458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.185472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.185489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.185503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.185523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.185536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.185554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.185568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.185585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.185598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.185616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.185629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.185653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.185667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.185685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.185698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.185716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.185729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.185747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.185773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.185796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.185810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 
[2024-12-05 03:10:42.185828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.185841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.185858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.185872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.185889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.185903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.185920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.185934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.185952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:59168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.185966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.185983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.185997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:59208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:91 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.443 [2024-12-05 03:10:42.186665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.443 [2024-12-05 03:10:42.186682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.186696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.186714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.186728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.186745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.186786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.186807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.186822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.186843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.186857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.186875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.186889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.186907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.186932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.186953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.186968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.186988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.187003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.187035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.187067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.187099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.187133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 
03:10:42.187166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.187197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.187244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.187275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.187306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.187343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.187374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.187408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.187442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.187484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.187518] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.444 [2024-12-05 03:10:42.187555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:58552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.444 [2024-12-05 03:10:42.187587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.444 [2024-12-05 03:10:42.187618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:58568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.444 [2024-12-05 03:10:42.187649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.444 [2024-12-05 03:10:42.187683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:58584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.444 [2024-12-05 03:10:42.187715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:58592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.444 [2024-12-05 03:10:42.187746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:58600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.444 [2024-12-05 03:10:42.187792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:58608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.444 [2024-12-05 03:10:42.187827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.444 [2024-12-05 03:10:42.187858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.444 [2024-12-05 03:10:42.187892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.444 [2024-12-05 03:10:42.187924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:58640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.444 [2024-12-05 03:10:42.187957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.187974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:58648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.444 [2024-12-05 03:10:42.187988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.188005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:58656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.444 [2024-12-05 03:10:42.188019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.444 [2024-12-05 03:10:42.188036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.444 [2024-12-05 03:10:42.188049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.445 [2024-12-05 03:10:42.188066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:26:11.445 [2024-12-05 03:10:42.188087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:11.445 [2024-12-05 03:10:42.188102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:11.445 [2024-12-05 03:10:42.188115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59552 len:8 PRP1 0x0 PRP2 0x0 00:26:11.445 [2024-12-05 03:10:42.188131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.445 [2024-12-05 03:10:42.188488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.445 [2024-12-05 03:10:42.188519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.445 [2024-12-05 03:10:42.188544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.445 [2024-12-05 03:10:42.188559] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.445 [2024-12-05 03:10:42.188575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.445 [2024-12-05 03:10:42.188588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.445 [2024-12-05 03:10:42.188603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.445 [2024-12-05 03:10:42.188616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.445 [2024-12-05 03:10:42.188631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:11.445 [2024-12-05 03:10:42.188881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.445 [2024-12-05 03:10:42.188935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:11.445 [2024-12-05 03:10:42.189075] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.445 [2024-12-05 03:10:42.189108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:11.445 [2024-12-05 03:10:42.189128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:11.445 [2024-12-05 03:10:42.189155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:11.445 [2024-12-05 03:10:42.189184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.445 [2024-12-05 03:10:42.189199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.445 [2024-12-05 03:10:42.189218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.445 [2024-12-05 03:10:42.189235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.445 [2024-12-05 03:10:42.189253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.445 03:10:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:26:12.379 3658.50 IOPS, 14.29 MiB/s [2024-12-05T03:10:43.223Z] [2024-12-05 03:10:43.189393] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.379 [2024-12-05 03:10:43.189480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:12.379 [2024-12-05 03:10:43.189503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:12.379 [2024-12-05 03:10:43.189534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:12.379 [2024-12-05 03:10:43.189563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.379 [2024-12-05 03:10:43.189578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.379 [2024-12-05 03:10:43.189595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.379 [2024-12-05 03:10:43.189610] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.379 [2024-12-05 03:10:43.189626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.379 03:10:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:12.636 [2024-12-05 03:10:43.459268] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:12.894 03:10:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 88452 00:26:13.460 2439.00 IOPS, 9.53 MiB/s [2024-12-05T03:10:44.304Z] [2024-12-05 03:10:44.206680] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
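The reconnect attempts above keep failing with `connect() failed, errno = 111` until `rpc.py nvmf_subsystem_add_listener` re-registers the 10.0.0.3:4420 listener, after which the controller reset finally succeeds. As a minimal, self-contained sketch (plain Python sockets on 127.0.0.1 with an ephemeral port, not the SPDK uring socket layer or the fabric address used by the test), this is what errno 111 (ECONNREFUSED on Linux) describes: the TCP connection is refused while no listener is bound to the port, and the same connect succeeds once one is.

```python
# Sketch only: reproduces the errno-111 pattern seen in the log, not the SPDK code path.
# Uses 127.0.0.1 and an ephemeral port instead of the target's 10.0.0.3:4420.
import errno
import socket

probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))        # reserve a free port, but do not listen yet
addr = probe.getsockname()

try:
    socket.create_connection(addr, timeout=1)
except OSError as e:
    # No listener bound: the kernel answers with RST -> ECONNREFUSED (errno 111)
    assert e.errno == errno.ECONNREFUSED
    print("connect() failed, errno =", e.errno)

probe.listen(1)                     # the "nvmf_subsystem_add_listener" step, in miniature
with socket.create_connection(addr, timeout=1):
    print("reconnect succeeded once a listener was bound")
```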
00:26:15.323 1829.25 IOPS, 7.15 MiB/s [2024-12-05T03:10:47.101Z] 2963.60 IOPS, 11.58 MiB/s [2024-12-05T03:10:48.476Z] 3911.50 IOPS, 15.28 MiB/s [2024-12-05T03:10:49.412Z] 4582.00 IOPS, 17.90 MiB/s [2024-12-05T03:10:50.390Z] 5086.88 IOPS, 19.87 MiB/s [2024-12-05T03:10:51.324Z] 5478.56 IOPS, 21.40 MiB/s [2024-12-05T03:10:51.324Z] 5799.10 IOPS, 22.65 MiB/s 00:26:20.480 Latency(us) 00:26:20.480 [2024-12-05T03:10:51.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.480 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:20.480 Verification LBA range: start 0x0 length 0x4000 00:26:20.480 NVMe0n1 : 10.01 5804.07 22.67 0.00 0.00 22008.06 1199.01 3019898.88 00:26:20.480 [2024-12-05T03:10:51.324Z] =================================================================================================================== 00:26:20.480 [2024-12-05T03:10:51.324Z] Total : 5804.07 22.67 0.00 0.00 22008.06 1199.01 3019898.88 00:26:20.480 { 00:26:20.480 "results": [ 00:26:20.480 { 00:26:20.480 "job": "NVMe0n1", 00:26:20.480 "core_mask": "0x4", 00:26:20.480 "workload": "verify", 00:26:20.480 "status": "finished", 00:26:20.480 "verify_range": { 00:26:20.480 "start": 0, 00:26:20.480 "length": 16384 00:26:20.480 }, 00:26:20.480 "queue_depth": 128, 00:26:20.480 "io_size": 4096, 00:26:20.480 "runtime": 10.007109, 00:26:20.480 "iops": 5804.073883875953, 00:26:20.480 "mibps": 22.67216360889044, 00:26:20.480 "io_failed": 0, 00:26:20.480 "io_timeout": 0, 00:26:20.480 "avg_latency_us": 22008.05619541025, 00:26:20.480 "min_latency_us": 1199.010909090909, 00:26:20.480 "max_latency_us": 3019898.88 00:26:20.480 } 00:26:20.480 ], 00:26:20.480 "core_count": 1 00:26:20.480 } 00:26:20.480 03:10:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=88557 00:26:20.480 03:10:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:20.480 03:10:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:26:20.480 Running I/O for 10 seconds... 
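The latency summary table and the JSON blob above describe the same run (5804.07 IOPS at a 4096-byte IO size over roughly 10 s). A small sketch, assuming the JSON has been trimmed to the few fields used here, recomputes the MiB/s figure from `iops` and `io_size` to show how the table's 22.67 MiB/s follows from the raw result fields; the field names match the blob printed above.

```python
# Sketch: recompute throughput from the bdevperf result fields shown in the log.
# The JSON below is a trimmed excerpt of the blob above, not the full output.
import json

results_json = """
{
  "results": [
    {
      "job": "NVMe0n1",
      "io_size": 4096,
      "runtime": 10.007109,
      "iops": 5804.073883875953,
      "avg_latency_us": 22008.05619541025
    }
  ],
  "core_count": 1
}
"""

doc = json.loads(results_json)
for job in doc["results"]:
    mibps = job["iops"] * job["io_size"] / (1024 * 1024)   # 4 KiB IOs -> MiB/s
    print(f'{job["job"]}: {job["iops"]:.2f} IOPS, {mibps:.2f} MiB/s '
          f'over {job["runtime"]:.2f} s (avg latency {job["avg_latency_us"]:.0f} us)')
```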
00:26:21.414 03:10:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:21.676 6439.00 IOPS, 25.15 MiB/s [2024-12-05T03:10:52.520Z] [2024-12-05 03:10:52.356829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.356932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.356950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.356967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.356978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.356991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357095] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357156] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357372] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357456] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357712] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357723] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.676 [2024-12-05 03:10:52.357886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.357898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.357910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.357923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.357934] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.357946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.357957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.357972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.357983] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.357999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358166] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358240] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:21.677 [2024-12-05 03:10:52.358372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.358409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.358440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.358455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.358471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.358484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.358498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.358511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.358526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.358538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.358552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.358565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.358579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.358592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.358606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 
lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.358619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.358633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.358645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.358660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.358672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.358686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.358698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.358712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.358725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.358740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.358768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.358787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.358800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.358815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.358827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.358842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.358854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.358868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.358881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.358895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.358908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.358948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.358979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.358995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.359009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.359024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.359037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.359052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.359065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.677 [2024-12-05 03:10:52.359080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.677 [2024-12-05 03:10:52.359094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 
03:10:52.359275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359554] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.359982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:60248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.359994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.360008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.360020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.360034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:60264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.360046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.360060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.360073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.360087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.360099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.360113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.360125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.360139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.360151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.360165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.360178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.360192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.678 [2024-12-05 03:10:52.360204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.678 [2024-12-05 03:10:52.360218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:60360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 
[2024-12-05 03:10:52.360382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360643] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:60472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.360978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.360993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.361006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.361036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.361049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.361077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.361090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.361106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.361134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.361148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.361160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.361189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:60576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.361216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.361231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.361243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.361256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.361268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.361282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.361294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.361314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:81 nsid:1 lba:60608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.361326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.361340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.361352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.361366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.361393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.361407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.679 [2024-12-05 03:10:52.361419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.679 [2024-12-05 03:10:52.361433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.680 [2024-12-05 03:10:52.361445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.680 [2024-12-05 03:10:52.361472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.680 [2024-12-05 03:10:52.361498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.680 [2024-12-05 03:10:52.361525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.680 [2024-12-05 03:10:52.361551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.680 [2024-12-05 03:10:52.361577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60688 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.680 [2024-12-05 03:10:52.361604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.680 [2024-12-05 03:10:52.361630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.680 [2024-12-05 03:10:52.361656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.680 [2024-12-05 03:10:52.361683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.680 [2024-12-05 03:10:52.361710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.680 [2024-12-05 03:10:52.361736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.680 [2024-12-05 03:10:52.361763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.680 [2024-12-05 03:10:52.361808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.680 [2024-12-05 03:10:52.361835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.680 [2024-12-05 03:10:52.361862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.680 
[2024-12-05 03:10:52.361889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.680 [2024-12-05 03:10:52.361915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.680 [2024-12-05 03:10:52.361941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.680 [2024-12-05 03:10:52.361968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.361982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.680 [2024-12-05 03:10:52.361994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.362008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.680 [2024-12-05 03:10:52.362020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.362034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.680 [2024-12-05 03:10:52.362046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.362060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.680 [2024-12-05 03:10:52.362072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.362087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.680 [2024-12-05 03:10:52.362099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.362112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002bc80 is same with the state(6) to be set 00:26:21.680 [2024-12-05 03:10:52.362129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:21.680 [2024-12-05 03:10:52.362140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:21.680 [2024-12-05 03:10:52.362151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60720 len:8 PRP1 0x0 PRP2 0x0 
00:26:21.680 [2024-12-05 03:10:52.362164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.680 [2024-12-05 03:10:52.362643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:21.680 [2024-12-05 03:10:52.362781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:21.680 [2024-12-05 03:10:52.362948] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.680 [2024-12-05 03:10:52.362995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:21.680 [2024-12-05 03:10:52.363012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:21.680 [2024-12-05 03:10:52.363039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:21.680 [2024-12-05 03:10:52.363063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:21.680 [2024-12-05 03:10:52.363077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:21.680 [2024-12-05 03:10:52.363091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:21.680 [2024-12-05 03:10:52.363106] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:26:21.681 [2024-12-05 03:10:52.363120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:21.681 03:10:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:26:22.615 3739.00 IOPS, 14.61 MiB/s [2024-12-05T03:10:53.459Z] [2024-12-05 03:10:53.363277] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.615 [2024-12-05 03:10:53.363382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:22.615 [2024-12-05 03:10:53.363403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:22.615 [2024-12-05 03:10:53.363433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:22.615 [2024-12-05 03:10:53.363458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:22.615 [2024-12-05 03:10:53.363471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:22.615 [2024-12-05 03:10:53.363485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:22.615 [2024-12-05 03:10:53.363499] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:26:22.615 [2024-12-05 03:10:53.363512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:23.548 2492.67 IOPS, 9.74 MiB/s [2024-12-05T03:10:54.392Z] [2024-12-05 03:10:54.363669] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.548 [2024-12-05 03:10:54.363760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:23.548 [2024-12-05 03:10:54.363794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:23.548 [2024-12-05 03:10:54.363826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:23.548 [2024-12-05 03:10:54.363851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:23.548 [2024-12-05 03:10:54.363864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:23.548 [2024-12-05 03:10:54.363877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:23.548 [2024-12-05 03:10:54.363891] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:26:23.548 [2024-12-05 03:10:54.363905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:24.749 1869.50 IOPS, 7.30 MiB/s [2024-12-05T03:10:55.593Z] [2024-12-05 03:10:55.366875] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.749 [2024-12-05 03:10:55.367004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:24.749 [2024-12-05 03:10:55.367026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:24.749 [2024-12-05 03:10:55.367350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:24.749 [2024-12-05 03:10:55.367612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:24.749 [2024-12-05 03:10:55.367641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:24.749 [2024-12-05 03:10:55.367657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:24.749 [2024-12-05 03:10:55.367672] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:26:24.749 [2024-12-05 03:10:55.367686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:24.749 03:10:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:25.006 [2024-12-05 03:10:55.636205] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:25.006 03:10:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 88557 00:26:25.573 1495.60 IOPS, 5.84 MiB/s [2024-12-05T03:10:56.417Z] [2024-12-05 03:10:56.395944] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:26:27.442 2395.50 IOPS, 9.36 MiB/s [2024-12-05T03:10:59.219Z] 3283.14 IOPS, 12.82 MiB/s [2024-12-05T03:11:00.596Z] 3956.62 IOPS, 15.46 MiB/s [2024-12-05T03:11:01.531Z] 4476.33 IOPS, 17.49 MiB/s [2024-12-05T03:11:01.531Z] 4901.50 IOPS, 19.15 MiB/s 00:26:30.687 Latency(us) 00:26:30.687 [2024-12-05T03:11:01.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.687 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:30.687 Verification LBA range: start 0x0 length 0x4000 00:26:30.687 NVMe0n1 : 10.01 4906.43 19.17 4060.87 0.00 14237.41 789.41 3019898.88 00:26:30.687 [2024-12-05T03:11:01.531Z] =================================================================================================================== 00:26:30.687 [2024-12-05T03:11:01.531Z] Total : 4906.43 19.17 4060.87 0.00 14237.41 0.00 3019898.88 00:26:30.687 { 00:26:30.687 "results": [ 00:26:30.687 { 00:26:30.687 "job": "NVMe0n1", 00:26:30.687 "core_mask": "0x4", 00:26:30.687 "workload": "verify", 00:26:30.687 "status": "finished", 00:26:30.687 "verify_range": { 00:26:30.687 "start": 0, 00:26:30.687 "length": 16384 00:26:30.687 }, 00:26:30.687 "queue_depth": 128, 00:26:30.687 "io_size": 4096, 00:26:30.687 "runtime": 10.011143, 00:26:30.687 "iops": 4906.432761973333, 00:26:30.687 "mibps": 19.165752976458332, 00:26:30.687 "io_failed": 40654, 00:26:30.687 "io_timeout": 0, 00:26:30.687 "avg_latency_us": 14237.408459599617, 00:26:30.687 "min_latency_us": 789.4109090909091, 00:26:30.687 "max_latency_us": 3019898.88 00:26:30.687 } 00:26:30.687 ], 00:26:30.687 "core_count": 1 00:26:30.687 } 00:26:30.687 03:11:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 88428 00:26:30.687 03:11:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 88428 ']' 00:26:30.687 03:11:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 88428 00:26:30.687 03:11:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:26:30.687 03:11:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:30.687 03:11:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88428 00:26:30.687 03:11:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:30.687 03:11:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:30.687 killing process with pid 88428 00:26:30.687 03:11:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88428' 00:26:30.687 Received shutdown signal, test time was about 10.000000 
seconds 00:26:30.687 00:26:30.687 Latency(us) 00:26:30.687 [2024-12-05T03:11:01.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.687 [2024-12-05T03:11:01.531Z] =================================================================================================================== 00:26:30.687 [2024-12-05T03:11:01.531Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:30.687 03:11:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 88428 00:26:30.687 03:11:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 88428 00:26:31.623 03:11:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=88676 00:26:31.623 03:11:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:26:31.623 03:11:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 88676 /var/tmp/bdevperf.sock 00:26:31.623 03:11:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 88676 ']' 00:26:31.623 03:11:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:31.623 03:11:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:31.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:31.623 03:11:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:31.623 03:11:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:31.623 03:11:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:31.623 [2024-12-05 03:11:02.207180] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
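As a sanity check only, the summary columns in the bdevperf table above can be reproduced from the JSON fields reported with it:

  MiB/s  = iops * io_size / 2^20  = 4906.43 * 4096 / 1048576  ≈ 19.17
  Fail/s = io_failed / runtime    = 40654 / 10.011143         ≈ 4060.87

with runtime(s) ≈ 10.01, and the Average/min/max columns taken directly from avg_latency_us, min_latency_us and max_latency_us (14237.41 / 789.41 / 3019898.88 microseconds).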
00:26:31.623 [2024-12-05 03:11:02.207373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88676 ] 00:26:31.623 [2024-12-05 03:11:02.372566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.623 [2024-12-05 03:11:02.457863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:31.882 [2024-12-05 03:11:02.609862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:32.449 03:11:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.449 03:11:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:32.449 03:11:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88676 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:26:32.449 03:11:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=88692 00:26:32.449 03:11:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:26:32.708 03:11:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:26:32.966 NVMe0n1 00:26:32.966 03:11:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=88734 00:26:32.966 03:11:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:32.966 03:11:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:26:33.224 Running I/O for 10 seconds... 
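For reference, the second bdevperf run above reduces to the following command sequence. Every command is copied verbatim from the harness output around this point; the "&" backgrounding is added here only for illustration, and the socket path, pid 88676 and the 10.0.0.3 target address are specific to this CI run.

# start bdevperf idle (-z) on core mask 0x4; tests are driven later over the RPC socket
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &

# attach the bpftrace timeout probes to the bdevperf pid recorded above
/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88676 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt &

# NVMe bdev options used by the test (flags as recorded in the log: -r -1 -e 9)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9

# attach the target subsystem over TCP with a 5 s controller-loss timeout and a 2 s reconnect delay
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# kick off the 10 second randread workload
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

# while I/O is running, drop the target listener to provoke the timeout/reconnect path
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420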
00:26:34.167 03:11:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:34.167 13716.00 IOPS, 53.58 MiB/s [2024-12-05T03:11:05.011Z] [2024-12-05 03:11:04.933386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933769] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.933990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.934001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.934017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.934044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.934060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.167 [2024-12-05 03:11:04.934072] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934138] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934264] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934341] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934617] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934703] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934900] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.934991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.935007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.935019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.935033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.935045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.935059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.935070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.935083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.935095] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.935108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.935120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.935135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.935147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.935161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.168 [2024-12-05 03:11:04.935172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.169 [2024-12-05 03:11:04.935187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.169 [2024-12-05 03:11:04.935272] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:53032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:58136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:36856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:35504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:91848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:56104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.935972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.935991] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.936005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.936022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:101272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.936035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.936052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.936065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.936082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.936095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.936112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:122520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.936124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.936141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:52064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.936155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.936174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.936187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.936204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.936217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.936234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.936247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.936264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:27184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.936278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.936294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 
nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.936307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.936325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:28320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.936338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.936355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:50664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.936368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.936385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.936413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.936435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.936449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.936471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.936485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.936503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.936516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.936534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.936547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.169 [2024-12-05 03:11:04.936565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.169 [2024-12-05 03:11:04.936578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.936595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.936609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.936626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.936639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.936656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.936670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.936689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:70144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.936702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.936720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.936733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.936780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.936793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.936825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.936852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.936873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.936887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.936905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.936919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.936937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.936950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.936968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.936981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 
[2024-12-05 03:11:04.937015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:89608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:112080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.170 [2024-12-05 03:11:04.937890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.170 [2024-12-05 03:11:04.937903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.937920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.937933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.937950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:34248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.937966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.937983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:89536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.937997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:34.171 [2024-12-05 03:11:04.938293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938601] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.171 [2024-12-05 03:11:04.938968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.171 [2024-12-05 03:11:04.938989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.172 [2024-12-05 03:11:04.939005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.939023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.172 [2024-12-05 03:11:04.939036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.939057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.172 [2024-12-05 03:11:04.939070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.939088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.172 [2024-12-05 03:11:04.939102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.939120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.172 [2024-12-05 03:11:04.939133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.939150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.172 [2024-12-05 03:11:04.939169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.939187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:57312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.172 [2024-12-05 03:11:04.939200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.939218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.172 [2024-12-05 03:11:04.939231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.939248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.172 [2024-12-05 03:11:04.939261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.939279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:104056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.172 [2024-12-05 03:11:04.939306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.939325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.172 [2024-12-05 03:11:04.939338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.939355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.172 [2024-12-05 03:11:04.939367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.939384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.172 [2024-12-05 03:11:04.939397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.939416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.172 [2024-12-05 03:11:04.939429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.939446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.172 [2024-12-05 03:11:04.939459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.939475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:26:34.172 [2024-12-05 03:11:04.939493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.172 [2024-12-05 03:11:04.939508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.172 [2024-12-05 03:11:04.939523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7608 len:8 PRP1 0x0 PRP2 0x0 00:26:34.172 [2024-12-05 03:11:04.939539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.939909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.172 [2024-12-05 03:11:04.939945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.939966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.172 [2024-12-05 03:11:04.939980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.939996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.172 [2024-12-05 03:11:04.940009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.940024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.172 [2024-12-05 03:11:04.940037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.172 [2024-12-05 03:11:04.940051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:34.172 [2024-12-05 03:11:04.940366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:34.172 [2024-12-05 03:11:04.940439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:34.172 [2024-12-05 03:11:04.940599] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.172 [2024-12-05 03:11:04.940633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:34.172 [2024-12-05 03:11:04.940655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:34.172 [2024-12-05 03:11:04.940684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:34.172 [2024-12-05 03:11:04.940716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:26:34.172 [2024-12-05 03:11:04.940732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:26:34.172 [2024-12-05 03:11:04.940750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:34.172 [2024-12-05 03:11:04.940785] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
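The block above is one complete reset cycle for nqn.2016-06.io.spdk:cnode1: queued reads are aborted with SQ DELETION, the TCP connect() to 10.0.0.3:4420 fails with errno 111, controller reinitialization fails, and bdev_nvme schedules another attempt after the reconnect delay. A minimal sketch of attaching a controller with explicit reconnect/loss timeouts through rpc.py, assuming the standard bdev_nvme_attach_controller options are available in this build (the exact flags and values used by host/timeout.sh are not shown in this excerpt):

  # Sketch only: attach an NVMe-oF TCP controller with explicit reconnect behaviour.
  # Flag names assume the stock rpc.py; the timeout values are illustrative.
  scripts/rpc.py bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 \
      --ctrlr-loss-timeout-sec 10

  # One way to provoke the connect() errno 111 failures seen above is to drop
  # the target-side listener while I/O is in flight; bdev_nvme then retries
  # every reconnect-delay-sec until ctrlr-loss-timeout-sec expires.
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420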
00:26:34.172 [2024-12-05 03:11:04.940805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:34.172 03:11:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 88734 00:26:36.036 7557.50 IOPS, 29.52 MiB/s [2024-12-05T03:11:07.139Z] 5038.33 IOPS, 19.68 MiB/s [2024-12-05T03:11:07.139Z] [2024-12-05 03:11:06.951444] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.295 [2024-12-05 03:11:06.951538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:36.295 [2024-12-05 03:11:06.951564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:36.295 [2024-12-05 03:11:06.951598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:36.295 [2024-12-05 03:11:06.951631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:26:36.295 [2024-12-05 03:11:06.951646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:26:36.295 [2024-12-05 03:11:06.951662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:36.295 [2024-12-05 03:11:06.951678] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:26:36.295 [2024-12-05 03:11:06.951695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:38.164 3778.75 IOPS, 14.76 MiB/s [2024-12-05T03:11:09.008Z] 3023.00 IOPS, 11.81 MiB/s [2024-12-05T03:11:09.008Z] [2024-12-05 03:11:08.951946] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.164 [2024-12-05 03:11:08.952051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:38.164 [2024-12-05 03:11:08.952096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:38.164 [2024-12-05 03:11:08.952146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:38.164 [2024-12-05 03:11:08.952180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:26:38.165 [2024-12-05 03:11:08.952195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:26:38.165 [2024-12-05 03:11:08.952212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:38.165 [2024-12-05 03:11:08.952229] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:26:38.165 [2024-12-05 03:11:08.952245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:40.031 2519.17 IOPS, 9.84 MiB/s [2024-12-05T03:11:11.134Z] 2159.29 IOPS, 8.43 MiB/s [2024-12-05T03:11:11.134Z] [2024-12-05 03:11:10.952378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:26:40.290 [2024-12-05 03:11:10.952467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:26:40.290 [2024-12-05 03:11:10.952484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:26:40.290 [2024-12-05 03:11:10.952503] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:26:40.290 [2024-12-05 03:11:10.952520] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:26:41.224 1889.38 IOPS, 7.38 MiB/s 00:26:41.224 Latency(us) 00:26:41.224 [2024-12-05T03:11:12.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.224 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:26:41.224 NVMe0n1 : 8.12 1860.58 7.27 15.76 0.00 68324.76 8817.57 7046430.72 00:26:41.224 [2024-12-05T03:11:12.068Z] =================================================================================================================== 00:26:41.224 [2024-12-05T03:11:12.068Z] Total : 1860.58 7.27 15.76 0.00 68324.76 8817.57 7046430.72 00:26:41.224 { 00:26:41.224 "results": [ 00:26:41.224 { 00:26:41.224 "job": "NVMe0n1", 00:26:41.224 "core_mask": "0x4", 00:26:41.224 "workload": "randread", 00:26:41.224 "status": "finished", 00:26:41.224 "queue_depth": 128, 00:26:41.224 "io_size": 4096, 00:26:41.224 "runtime": 8.123817, 00:26:41.224 "iops": 1860.5785925507678, 00:26:41.224 "mibps": 7.267885127151437, 00:26:41.224 "io_failed": 128, 00:26:41.224 "io_timeout": 0, 00:26:41.224 "avg_latency_us": 68324.76154992158, 00:26:41.224 "min_latency_us": 8817.57090909091, 00:26:41.224 "max_latency_us": 7046430.72 00:26:41.224 } 00:26:41.224 ], 00:26:41.224 "core_count": 1 00:26:41.224 } 00:26:41.224 03:11:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:41.224 Attaching 5 probes... 
00:26:41.224 1298.378026: reset bdev controller NVMe0 00:26:41.224 1298.525513: reconnect bdev controller NVMe0 00:26:41.224 3309.272401: reconnect delay bdev controller NVMe0 00:26:41.224 3309.324734: reconnect bdev controller NVMe0 00:26:41.224 5309.781229: reconnect delay bdev controller NVMe0 00:26:41.224 5309.843846: reconnect bdev controller NVMe0 00:26:41.224 7310.352039: reconnect delay bdev controller NVMe0 00:26:41.224 7310.389157: reconnect bdev controller NVMe0 00:26:41.224 03:11:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:26:41.224 03:11:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:26:41.224 03:11:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 88692 00:26:41.224 03:11:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:41.224 03:11:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 88676 00:26:41.224 03:11:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 88676 ']' 00:26:41.224 03:11:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 88676 00:26:41.224 03:11:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:26:41.224 03:11:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:41.224 03:11:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88676 00:26:41.224 03:11:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:41.224 03:11:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:41.224 killing process with pid 88676 00:26:41.224 03:11:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88676' 00:26:41.224 03:11:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 88676 00:26:41.224 Received shutdown signal, test time was about 8.191922 seconds 00:26:41.224 00:26:41.224 Latency(us) 00:26:41.224 [2024-12-05T03:11:12.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.224 [2024-12-05T03:11:12.068Z] =================================================================================================================== 00:26:41.224 [2024-12-05T03:11:12.068Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:41.224 03:11:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 88676 00:26:42.159 03:11:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:42.417 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:26:42.417 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:26:42.417 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:42.417 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:26:42.417 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:42.417 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:26:42.417 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:42.417 03:11:13 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:42.417 rmmod nvme_tcp 00:26:42.417 rmmod nvme_fabrics 00:26:42.445 rmmod nvme_keyring 00:26:42.445 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:42.445 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:26:42.445 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:26:42.445 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 88230 ']' 00:26:42.445 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 88230 00:26:42.445 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 88230 ']' 00:26:42.445 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 88230 00:26:42.445 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:26:42.445 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:42.704 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88230 00:26:42.704 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:42.704 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:42.704 killing process with pid 88230 00:26:42.704 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88230' 00:26:42.704 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 88230 00:26:42.704 03:11:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 88230 00:26:43.640 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:43.640 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:43.640 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:43.640 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:26:43.640 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:43.640 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:26:43.640 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:26:43.640 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:43.640 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:43.640 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:43.640 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:43.640 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:43.640 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:43.640 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:43.640 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:43.640 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:43.640 03:11:14 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:43.641 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:43.641 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:43.641 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:43.641 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:43.641 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:43.641 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:43.641 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.641 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:43.641 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.641 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:26:43.641 ************************************ 00:26:43.641 END TEST nvmf_timeout 00:26:43.641 ************************************ 00:26:43.641 00:26:43.641 real 0m50.250s 00:26:43.641 user 2m25.916s 00:26:43.641 sys 0m5.545s 00:26:43.641 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:43.641 03:11:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:43.900 03:11:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:26:43.900 03:11:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:43.900 00:26:43.900 real 6m22.519s 00:26:43.900 user 17m41.720s 00:26:43.900 sys 1m16.315s 00:26:43.900 03:11:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:43.900 03:11:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.900 ************************************ 00:26:43.900 END TEST nvmf_host 00:26:43.900 ************************************ 00:26:43.900 03:11:14 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:26:43.900 03:11:14 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:26:43.900 00:26:43.900 real 17m4.557s 00:26:43.900 user 44m27.548s 00:26:43.900 sys 4m1.980s 00:26:43.900 03:11:14 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:43.900 ************************************ 00:26:43.900 END TEST nvmf_tcp 00:26:43.900 ************************************ 00:26:43.900 03:11:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:43.900 03:11:14 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:26:43.900 03:11:14 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:43.900 03:11:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:43.900 03:11:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:43.900 03:11:14 -- common/autotest_common.sh@10 -- # set +x 00:26:43.900 ************************************ 00:26:43.900 START TEST nvmf_dif 00:26:43.900 ************************************ 00:26:43.900 03:11:14 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:43.900 * Looking for test storage... 
00:26:43.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:43.900 03:11:14 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:43.900 03:11:14 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:43.900 03:11:14 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:26:44.159 03:11:14 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:44.159 03:11:14 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:44.159 03:11:14 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:44.159 03:11:14 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:44.159 03:11:14 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:26:44.159 03:11:14 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:26:44.159 03:11:14 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:26:44.159 03:11:14 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:26:44.160 03:11:14 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:44.160 03:11:14 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:44.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.160 --rc genhtml_branch_coverage=1 00:26:44.160 --rc genhtml_function_coverage=1 00:26:44.160 --rc genhtml_legend=1 00:26:44.160 --rc geninfo_all_blocks=1 00:26:44.160 --rc geninfo_unexecuted_blocks=1 00:26:44.160 00:26:44.160 ' 00:26:44.160 03:11:14 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:44.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.160 --rc genhtml_branch_coverage=1 00:26:44.160 --rc genhtml_function_coverage=1 00:26:44.160 --rc genhtml_legend=1 00:26:44.160 --rc geninfo_all_blocks=1 00:26:44.160 --rc geninfo_unexecuted_blocks=1 00:26:44.160 00:26:44.160 ' 00:26:44.160 03:11:14 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:26:44.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.160 --rc genhtml_branch_coverage=1 00:26:44.160 --rc genhtml_function_coverage=1 00:26:44.160 --rc genhtml_legend=1 00:26:44.160 --rc geninfo_all_blocks=1 00:26:44.160 --rc geninfo_unexecuted_blocks=1 00:26:44.160 00:26:44.160 ' 00:26:44.160 03:11:14 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:44.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.160 --rc genhtml_branch_coverage=1 00:26:44.160 --rc genhtml_function_coverage=1 00:26:44.160 --rc genhtml_legend=1 00:26:44.160 --rc geninfo_all_blocks=1 00:26:44.160 --rc geninfo_unexecuted_blocks=1 00:26:44.160 00:26:44.160 ' 00:26:44.160 03:11:14 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.160 03:11:14 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.160 03:11:14 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.160 03:11:14 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.160 03:11:14 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.160 03:11:14 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:26:44.160 03:11:14 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:44.160 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:44.160 03:11:14 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:26:44.160 03:11:14 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:44.160 03:11:14 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:44.160 03:11:14 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:26:44.160 03:11:14 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.160 03:11:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:44.160 03:11:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:44.160 03:11:14 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:44.160 Cannot find device "nvmf_init_br" 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@162 -- # true 00:26:44.160 03:11:14 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:44.161 Cannot find device "nvmf_init_br2" 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@163 -- # true 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:44.161 Cannot find device "nvmf_tgt_br" 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@164 -- # true 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:44.161 Cannot find device "nvmf_tgt_br2" 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@165 -- # true 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:44.161 Cannot find device "nvmf_init_br" 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@166 -- # true 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:44.161 Cannot find device "nvmf_init_br2" 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@167 -- # true 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:44.161 Cannot find device "nvmf_tgt_br" 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@168 -- # true 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:44.161 Cannot find device "nvmf_tgt_br2" 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@169 -- # true 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:44.161 Cannot find device "nvmf_br" 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@170 -- # true 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:26:44.161 Cannot find device "nvmf_init_if" 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@171 -- # true 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:44.161 Cannot find device "nvmf_init_if2" 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@172 -- # true 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:44.161 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@173 -- # true 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:44.161 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@174 -- # true 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:44.161 03:11:14 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:44.426 03:11:15 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:44.426 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:44.426 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:26:44.426 00:26:44.426 --- 10.0.0.3 ping statistics --- 00:26:44.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.426 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:44.426 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:44.426 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:26:44.426 00:26:44.426 --- 10.0.0.4 ping statistics --- 00:26:44.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.426 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:44.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:44.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:26:44.426 00:26:44.426 --- 10.0.0.1 ping statistics --- 00:26:44.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.426 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:44.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:44.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:26:44.426 00:26:44.426 --- 10.0.0.2 ping statistics --- 00:26:44.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.426 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:26:44.426 03:11:15 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:44.992 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:44.992 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:44.992 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:44.992 03:11:15 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.992 03:11:15 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:44.992 03:11:15 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:44.992 03:11:15 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.992 03:11:15 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:44.992 03:11:15 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:44.992 03:11:15 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:44.992 03:11:15 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:26:44.992 03:11:15 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:44.992 03:11:15 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:44.992 03:11:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:44.992 03:11:15 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=89246 00:26:44.992 03:11:15 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 89246 00:26:44.992 03:11:15 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:44.992 03:11:15 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 89246 ']' 00:26:44.992 03:11:15 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.992 03:11:15 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:44.992 03:11:15 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.992 03:11:15 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:44.992 03:11:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:44.992 [2024-12-05 03:11:15.746552] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:26:44.992 [2024-12-05 03:11:15.746721] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.250 [2024-12-05 03:11:15.935581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.250 [2024-12-05 03:11:16.060096] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:45.250 [2024-12-05 03:11:16.060175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.250 [2024-12-05 03:11:16.060200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.250 [2024-12-05 03:11:16.060234] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.250 [2024-12-05 03:11:16.060252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:45.250 [2024-12-05 03:11:16.061675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.508 [2024-12-05 03:11:16.282702] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:46.129 03:11:16 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:46.129 03:11:16 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:26:46.129 03:11:16 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:46.129 03:11:16 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:46.129 03:11:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:46.129 03:11:16 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:46.129 03:11:16 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:26:46.129 03:11:16 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:46.129 03:11:16 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.129 03:11:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:46.129 [2024-12-05 03:11:16.774825] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.129 03:11:16 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.129 03:11:16 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:46.129 03:11:16 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:46.129 03:11:16 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:46.129 03:11:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:46.129 ************************************ 00:26:46.129 START TEST fio_dif_1_default 00:26:46.129 ************************************ 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:46.129 bdev_null0 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:46.129 
03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:46.129 [2024-12-05 03:11:16.819090] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:46.129 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:46.130 { 00:26:46.130 "params": { 00:26:46.130 "name": "Nvme$subsystem", 00:26:46.130 "trtype": "$TEST_TRANSPORT", 00:26:46.130 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:26:46.130 "adrfam": "ipv4", 00:26:46.130 "trsvcid": "$NVMF_PORT", 00:26:46.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.130 "hdgst": ${hdgst:-false}, 00:26:46.130 "ddgst": ${ddgst:-false} 00:26:46.130 }, 00:26:46.130 "method": "bdev_nvme_attach_controller" 00:26:46.130 } 00:26:46.130 EOF 00:26:46.130 )") 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:46.130 "params": { 00:26:46.130 "name": "Nvme0", 00:26:46.130 "trtype": "tcp", 00:26:46.130 "traddr": "10.0.0.3", 00:26:46.130 "adrfam": "ipv4", 00:26:46.130 "trsvcid": "4420", 00:26:46.130 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:46.130 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:46.130 "hdgst": false, 00:26:46.130 "ddgst": false 00:26:46.130 }, 00:26:46.130 "method": "bdev_nvme_attach_controller" 00:26:46.130 }' 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:46.130 03:11:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:46.393 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:46.393 fio-3.35 00:26:46.393 Starting 1 thread 00:26:58.602 00:26:58.602 filename0: (groupid=0, jobs=1): err= 0: pid=89305: Thu Dec 5 03:11:27 2024 00:26:58.602 read: IOPS=7848, BW=30.7MiB/s (32.1MB/s)(307MiB/10001msec) 00:26:58.602 slat (usec): min=7, max=121, avg=10.11, stdev= 4.60 00:26:58.602 clat (usec): min=399, max=2961, avg=478.88, stdev=51.02 00:26:58.602 lat (usec): min=406, max=2984, avg=488.99, stdev=52.11 00:26:58.602 clat percentiles (usec): 00:26:58.602 | 1.00th=[ 408], 5.00th=[ 420], 10.00th=[ 429], 20.00th=[ 441], 00:26:58.602 | 30.00th=[ 453], 40.00th=[ 461], 50.00th=[ 474], 60.00th=[ 482], 00:26:58.602 | 70.00th=[ 494], 80.00th=[ 510], 90.00th=[ 537], 95.00th=[ 562], 00:26:58.602 | 99.00th=[ 635], 99.50th=[ 660], 99.90th=[ 750], 99.95th=[ 857], 00:26:58.602 | 99.99th=[ 1778] 00:26:58.602 bw ( KiB/s): min=29760, max=32192, per=100.00%, avg=31398.74, stdev=610.67, samples=19 00:26:58.602 iops : min= 7440, max= 8048, avg=7849.68, stdev=152.67, samples=19 00:26:58.602 lat (usec) : 500=74.77%, 750=25.13%, 
1000=0.05% 00:26:58.602 lat (msec) : 2=0.04%, 4=0.01% 00:26:58.602 cpu : usr=86.14%, sys=11.95%, ctx=97, majf=0, minf=1061 00:26:58.602 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:58.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:58.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:58.602 issued rwts: total=78492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:58.602 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:58.602 00:26:58.602 Run status group 0 (all jobs): 00:26:58.602 READ: bw=30.7MiB/s (32.1MB/s), 30.7MiB/s-30.7MiB/s (32.1MB/s-32.1MB/s), io=307MiB (322MB), run=10001-10001msec 00:26:58.602 ----------------------------------------------------- 00:26:58.602 Suppressions used: 00:26:58.602 count bytes template 00:26:58.602 1 8 /usr/src/fio/parse.c 00:26:58.602 1 8 libtcmalloc_minimal.so 00:26:58.602 1 904 libcrypto.so 00:26:58.602 ----------------------------------------------------- 00:26:58.602 00:26:58.602 03:11:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:58.602 03:11:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:58.602 03:11:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:58.602 03:11:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:58.602 03:11:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:58.602 03:11:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:58.602 03:11:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.602 03:11:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:58.602 03:11:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.602 03:11:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:58.602 03:11:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.602 03:11:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:58.602 03:11:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.602 00:26:58.602 real 0m12.196s 00:26:58.602 user 0m10.405s 00:26:58.602 sys 0m1.527s 00:26:58.602 03:11:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:58.602 ************************************ 00:26:58.602 03:11:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:58.602 END TEST fio_dif_1_default 00:26:58.602 ************************************ 00:26:58.602 03:11:29 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:58.602 03:11:29 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:58.602 03:11:29 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:58.602 03:11:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:58.602 ************************************ 00:26:58.602 START TEST fio_dif_1_multi_subsystems 00:26:58.602 ************************************ 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@94 -- # create_subsystems 0 1 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:58.602 bdev_null0 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:58.602 [2024-12-05 03:11:29.073708] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:58.602 bdev_null1 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:58.602 03:11:29 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:58.602 { 00:26:58.602 "params": { 00:26:58.602 "name": "Nvme$subsystem", 00:26:58.602 "trtype": "$TEST_TRANSPORT", 00:26:58.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:58.602 "adrfam": "ipv4", 00:26:58.602 "trsvcid": "$NVMF_PORT", 00:26:58.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:58.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:58.602 "hdgst": ${hdgst:-false}, 00:26:58.602 "ddgst": ${ddgst:-false} 00:26:58.602 }, 00:26:58.602 "method": "bdev_nvme_attach_controller" 00:26:58.602 } 00:26:58.602 EOF 00:26:58.602 )") 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:58.602 { 00:26:58.602 "params": { 00:26:58.602 "name": "Nvme$subsystem", 00:26:58.602 "trtype": "$TEST_TRANSPORT", 00:26:58.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:58.602 "adrfam": "ipv4", 00:26:58.602 "trsvcid": "$NVMF_PORT", 00:26:58.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:58.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:58.602 "hdgst": ${hdgst:-false}, 00:26:58.602 "ddgst": ${ddgst:-false} 00:26:58.602 }, 00:26:58.602 "method": "bdev_nvme_attach_controller" 00:26:58.602 } 00:26:58.602 EOF 00:26:58.602 )") 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:58.602 "params": { 00:26:58.602 "name": "Nvme0", 00:26:58.602 "trtype": "tcp", 00:26:58.602 "traddr": "10.0.0.3", 00:26:58.602 "adrfam": "ipv4", 00:26:58.602 "trsvcid": "4420", 00:26:58.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:58.602 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:58.602 "hdgst": false, 00:26:58.602 "ddgst": false 00:26:58.602 }, 00:26:58.602 "method": "bdev_nvme_attach_controller" 00:26:58.602 },{ 00:26:58.602 "params": { 00:26:58.602 "name": "Nvme1", 00:26:58.602 "trtype": "tcp", 00:26:58.602 "traddr": "10.0.0.3", 00:26:58.602 "adrfam": "ipv4", 00:26:58.602 "trsvcid": "4420", 00:26:58.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:58.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:58.602 "hdgst": false, 00:26:58.602 "ddgst": false 00:26:58.602 }, 00:26:58.602 "method": "bdev_nvme_attach_controller" 00:26:58.602 }' 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:58.602 03:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:58.602 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:58.602 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:58.602 fio-3.35 00:26:58.602 Starting 2 threads 00:27:10.805 00:27:10.805 filename0: (groupid=0, jobs=1): err= 0: pid=89469: Thu Dec 5 03:11:40 2024 00:27:10.805 read: IOPS=4303, BW=16.8MiB/s (17.6MB/s)(168MiB/10001msec) 00:27:10.805 slat (nsec): min=7728, max=71242, avg=14493.39, stdev=4911.57 00:27:10.805 clat (usec): min=494, max=1610, avg=889.13, stdev=68.96 00:27:10.805 lat (usec): min=502, max=1627, avg=903.63, stdev=70.25 00:27:10.805 clat percentiles (usec): 00:27:10.805 | 1.00th=[ 750], 5.00th=[ 783], 10.00th=[ 807], 20.00th=[ 832], 00:27:10.805 | 30.00th=[ 857], 40.00th=[ 865], 50.00th=[ 881], 60.00th=[ 898], 00:27:10.805 | 70.00th=[ 914], 80.00th=[ 938], 90.00th=[ 979], 95.00th=[ 1012], 00:27:10.805 | 99.00th=[ 1090], 99.50th=[ 1106], 99.90th=[ 1205], 99.95th=[ 1270], 00:27:10.805 | 99.99th=[ 1565] 00:27:10.805 bw ( KiB/s): min=17088, max=17408, per=50.07%, avg=17237.89, stdev=102.34, samples=19 00:27:10.805 iops : min= 4272, max= 4352, avg=4309.47, stdev=25.59, samples=19 00:27:10.805 lat (usec) : 500=0.01%, 750=0.86%, 1000=92.91% 00:27:10.805 lat (msec) : 2=6.23% 00:27:10.805 cpu : usr=90.84%, sys=7.79%, ctx=15, majf=0, minf=1061 00:27:10.805 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:10.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:10.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:10.805 issued rwts: total=43036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:10.805 latency 
: target=0, window=0, percentile=100.00%, depth=4 00:27:10.805 filename1: (groupid=0, jobs=1): err= 0: pid=89470: Thu Dec 5 03:11:40 2024 00:27:10.805 read: IOPS=4302, BW=16.8MiB/s (17.6MB/s)(168MiB/10001msec) 00:27:10.805 slat (nsec): min=7638, max=69556, avg=14542.22, stdev=5091.05 00:27:10.805 clat (usec): min=515, max=1686, avg=888.62, stdev=59.83 00:27:10.805 lat (usec): min=524, max=1723, avg=903.16, stdev=60.66 00:27:10.805 clat percentiles (usec): 00:27:10.805 | 1.00th=[ 799], 5.00th=[ 816], 10.00th=[ 824], 20.00th=[ 840], 00:27:10.805 | 30.00th=[ 857], 40.00th=[ 865], 50.00th=[ 873], 60.00th=[ 889], 00:27:10.805 | 70.00th=[ 906], 80.00th=[ 930], 90.00th=[ 963], 95.00th=[ 1004], 00:27:10.805 | 99.00th=[ 1074], 99.50th=[ 1106], 99.90th=[ 1205], 99.95th=[ 1336], 00:27:10.805 | 99.99th=[ 1549] 00:27:10.805 bw ( KiB/s): min=17088, max=17408, per=50.07%, avg=17236.32, stdev=100.83, samples=19 00:27:10.805 iops : min= 4272, max= 4352, avg=4309.05, stdev=25.19, samples=19 00:27:10.805 lat (usec) : 750=0.01%, 1000=95.07% 00:27:10.805 lat (msec) : 2=4.91% 00:27:10.805 cpu : usr=90.97%, sys=7.65%, ctx=26, majf=0, minf=1062 00:27:10.805 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:10.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:10.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:10.805 issued rwts: total=43032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:10.805 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:10.805 00:27:10.805 Run status group 0 (all jobs): 00:27:10.805 READ: bw=33.6MiB/s (35.2MB/s), 16.8MiB/s-16.8MiB/s (17.6MB/s-17.6MB/s), io=336MiB (353MB), run=10001-10001msec 00:27:10.805 ----------------------------------------------------- 00:27:10.805 Suppressions used: 00:27:10.805 count bytes template 00:27:10.805 2 16 /usr/src/fio/parse.c 00:27:10.805 1 8 libtcmalloc_minimal.so 00:27:10.805 1 904 libcrypto.so 00:27:10.805 ----------------------------------------------------- 00:27:10.805 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 
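
For reference, the NVMe/TCP plumbing that fio_dif_1_multi_subsystems exercised above boils down to a short RPC sequence. The sketch below is reconstructed from the rpc_cmd calls traced in this log and assumes a standalone nvmf_tgt with the TCP transport already created (the suite sets that up earlier in the run); it is an illustrative approximation of what the dif.sh create_subsystems helper drives, not its verbatim code:

# One DIF-capable null bdev per subsystem: 64 MiB, 512-byte blocks,
# 16 bytes of metadata, DIF type 1, exported over NVMe/TCP on 10.0.0.3:4420.
for i in 0 1; do
    scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.3 -s 4420
done

Teardown, as traced below, simply mirrors this sequence: nvmf_delete_subsystem for each cnode followed by bdev_null_delete for each null bdev.
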
00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:10.805 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.805 00:27:10.805 real 0m12.274s 00:27:10.805 user 0m20.000s 00:27:10.806 sys 0m1.885s 00:27:10.806 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:10.806 ************************************ 00:27:10.806 END TEST fio_dif_1_multi_subsystems 00:27:10.806 03:11:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:10.806 ************************************ 00:27:10.806 03:11:41 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:10.806 03:11:41 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:10.806 03:11:41 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:10.806 03:11:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:10.806 ************************************ 00:27:10.806 START TEST fio_dif_rand_params 00:27:10.806 ************************************ 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.806 03:11:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:10.806 bdev_null0 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:10.806 [2024-12-05 03:11:41.401308] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:10.806 { 00:27:10.806 "params": { 00:27:10.806 "name": "Nvme$subsystem", 00:27:10.806 "trtype": "$TEST_TRANSPORT", 00:27:10.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.806 "adrfam": "ipv4", 00:27:10.806 "trsvcid": "$NVMF_PORT", 00:27:10.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.806 "hdgst": ${hdgst:-false}, 00:27:10.806 "ddgst": ${ddgst:-false} 00:27:10.806 }, 00:27:10.806 "method": "bdev_nvme_attach_controller" 00:27:10.806 } 00:27:10.806 EOF 00:27:10.806 )") 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:10.806 "params": { 00:27:10.806 "name": "Nvme0", 00:27:10.806 "trtype": "tcp", 00:27:10.806 "traddr": "10.0.0.3", 00:27:10.806 "adrfam": "ipv4", 00:27:10.806 "trsvcid": "4420", 00:27:10.806 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:10.806 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:10.806 "hdgst": false, 00:27:10.806 "ddgst": false 00:27:10.806 }, 00:27:10.806 "method": "bdev_nvme_attach_controller" 00:27:10.806 }' 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:10.806 03:11:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:11.066 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:11.066 ... 
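
The job description banner above is printed by fio itself; the job file it parsed is generated on the fly by gen_fio_conf and fed through /dev/fd/61, so it never appears verbatim in the log. Based on the parameters set for this phase of fio_dif_rand_params (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5) and the banner, a roughly equivalent standalone invocation would look like the sketch below. The bdev name Nvme0n1 is an assumption (the Nvme0 controller attached via the JSON config shown earlier, exposing namespace 1), and file paths are placeholders:

# Approximate job file for the SPDK bdev fio plugin (illustrative,
# not the verbatim gen_fio_conf output).
cat > dif_rand.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
EOF

# Run it through the plugin, pointing --spdk_json_conf at a file containing
# the same bdev_nvme_attach_controller JSON printed earlier in this log.
LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf ./bdev.json dif_rand.fio
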
00:27:11.066 fio-3.35 00:27:11.066 Starting 3 threads 00:27:17.632 00:27:17.633 filename0: (groupid=0, jobs=1): err= 0: pid=89625: Thu Dec 5 03:11:47 2024 00:27:17.633 read: IOPS=224, BW=28.1MiB/s (29.5MB/s)(141MiB/5001msec) 00:27:17.633 slat (nsec): min=5697, max=64972, avg=18095.36, stdev=6135.98 00:27:17.633 clat (usec): min=12689, max=17034, avg=13292.51, stdev=509.43 00:27:17.633 lat (usec): min=12703, max=17058, avg=13310.60, stdev=509.70 00:27:17.633 clat percentiles (usec): 00:27:17.633 | 1.00th=[12780], 5.00th=[12780], 10.00th=[12911], 20.00th=[12911], 00:27:17.633 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13042], 60.00th=[13173], 00:27:17.633 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13960], 95.00th=[14353], 00:27:17.633 | 99.00th=[14877], 99.50th=[15270], 99.90th=[16909], 99.95th=[16909], 00:27:17.633 | 99.99th=[16909] 00:27:17.633 bw ( KiB/s): min=27648, max=29184, per=33.28%, avg=28757.33, stdev=557.94, samples=9 00:27:17.633 iops : min= 216, max= 228, avg=224.67, stdev= 4.36, samples=9 00:27:17.633 lat (msec) : 20=100.00% 00:27:17.633 cpu : usr=92.32%, sys=7.04%, ctx=10, majf=0, minf=1075 00:27:17.633 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:17.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.633 issued rwts: total=1125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.633 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:17.633 filename0: (groupid=0, jobs=1): err= 0: pid=89626: Thu Dec 5 03:11:47 2024 00:27:17.633 read: IOPS=225, BW=28.2MiB/s (29.5MB/s)(141MiB/5004msec) 00:27:17.633 slat (nsec): min=5514, max=60829, avg=18111.77, stdev=6221.78 00:27:17.633 clat (usec): min=4837, max=15719, avg=13266.44, stdev=652.74 00:27:17.633 lat (usec): min=4845, max=15741, avg=13284.55, stdev=653.09 00:27:17.633 clat percentiles (usec): 00:27:17.633 | 1.00th=[12780], 5.00th=[12780], 10.00th=[12911], 20.00th=[12911], 00:27:17.633 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13042], 60.00th=[13173], 00:27:17.633 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13960], 95.00th=[14353], 00:27:17.633 | 99.00th=[14877], 99.50th=[15270], 99.90th=[15664], 99.95th=[15664], 00:27:17.633 | 99.99th=[15664] 00:27:17.633 bw ( KiB/s): min=28359, max=29184, per=33.27%, avg=28751.00, stdev=411.18, samples=9 00:27:17.633 iops : min= 221, max= 228, avg=224.56, stdev= 3.28, samples=9 00:27:17.633 lat (msec) : 10=0.27%, 20=99.73% 00:27:17.633 cpu : usr=92.56%, sys=6.88%, ctx=6, majf=0, minf=1073 00:27:17.633 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:17.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.633 issued rwts: total=1128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.633 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:17.633 filename0: (groupid=0, jobs=1): err= 0: pid=89627: Thu Dec 5 03:11:47 2024 00:27:17.633 read: IOPS=224, BW=28.1MiB/s (29.5MB/s)(141MiB/5004msec) 00:27:17.633 slat (nsec): min=8009, max=58795, avg=17686.24, stdev=6427.55 00:27:17.633 clat (usec): min=12642, max=19740, avg=13301.50, stdev=578.03 00:27:17.633 lat (usec): min=12651, max=19772, avg=13319.19, stdev=578.48 00:27:17.633 clat percentiles (usec): 00:27:17.633 | 1.00th=[12780], 5.00th=[12780], 10.00th=[12911], 20.00th=[12911], 00:27:17.633 | 30.00th=[13042], 40.00th=[13042], 
50.00th=[13173], 60.00th=[13173], 00:27:17.633 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13960], 95.00th=[14353], 00:27:17.633 | 99.00th=[15008], 99.50th=[15270], 99.90th=[19792], 99.95th=[19792], 00:27:17.633 | 99.99th=[19792] 00:27:17.633 bw ( KiB/s): min=27648, max=29184, per=33.28%, avg=28757.33, stdev=557.94, samples=9 00:27:17.633 iops : min= 216, max= 228, avg=224.67, stdev= 4.36, samples=9 00:27:17.633 lat (msec) : 20=100.00% 00:27:17.633 cpu : usr=92.38%, sys=7.04%, ctx=11, majf=0, minf=1075 00:27:17.633 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:17.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.633 issued rwts: total=1125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.633 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:17.633 00:27:17.633 Run status group 0 (all jobs): 00:27:17.633 READ: bw=84.4MiB/s (88.5MB/s), 28.1MiB/s-28.2MiB/s (29.5MB/s-29.5MB/s), io=422MiB (443MB), run=5001-5004msec 00:27:17.892 ----------------------------------------------------- 00:27:17.892 Suppressions used: 00:27:17.892 count bytes template 00:27:17.892 5 44 /usr/src/fio/parse.c 00:27:17.892 1 8 libtcmalloc_minimal.so 00:27:17.892 1 904 libcrypto.so 00:27:17.892 ----------------------------------------------------- 00:27:17.892 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:17.892 03:11:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:17.892 bdev_null0 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:17.892 [2024-12-05 03:11:48.602336] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:17.892 bdev_null1 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:17.892 03:11:48 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:17.892 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:17.893 bdev_null2 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # 
fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:17.893 { 00:27:17.893 "params": { 00:27:17.893 "name": "Nvme$subsystem", 00:27:17.893 "trtype": "$TEST_TRANSPORT", 00:27:17.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.893 "adrfam": "ipv4", 00:27:17.893 "trsvcid": "$NVMF_PORT", 00:27:17.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.893 "hdgst": ${hdgst:-false}, 00:27:17.893 "ddgst": ${ddgst:-false} 00:27:17.893 }, 00:27:17.893 "method": "bdev_nvme_attach_controller" 00:27:17.893 } 00:27:17.893 EOF 00:27:17.893 )") 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:17.893 { 00:27:17.893 "params": { 00:27:17.893 "name": "Nvme$subsystem", 00:27:17.893 "trtype": "$TEST_TRANSPORT", 00:27:17.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.893 "adrfam": "ipv4", 00:27:17.893 "trsvcid": "$NVMF_PORT", 00:27:17.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.893 "hdgst": ${hdgst:-false}, 00:27:17.893 "ddgst": ${ddgst:-false} 00:27:17.893 }, 00:27:17.893 "method": "bdev_nvme_attach_controller" 00:27:17.893 } 00:27:17.893 EOF 00:27:17.893 )") 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:17.893 { 00:27:17.893 "params": { 00:27:17.893 "name": "Nvme$subsystem", 00:27:17.893 "trtype": "$TEST_TRANSPORT", 00:27:17.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.893 "adrfam": "ipv4", 00:27:17.893 "trsvcid": "$NVMF_PORT", 00:27:17.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.893 "hdgst": ${hdgst:-false}, 00:27:17.893 "ddgst": ${ddgst:-false} 00:27:17.893 }, 00:27:17.893 "method": "bdev_nvme_attach_controller" 00:27:17.893 } 00:27:17.893 EOF 00:27:17.893 )") 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:17.893 "params": { 00:27:17.893 "name": "Nvme0", 00:27:17.893 "trtype": "tcp", 00:27:17.893 "traddr": "10.0.0.3", 00:27:17.893 "adrfam": "ipv4", 00:27:17.893 "trsvcid": "4420", 00:27:17.893 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:17.893 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:17.893 "hdgst": false, 00:27:17.893 "ddgst": false 00:27:17.893 }, 00:27:17.893 "method": "bdev_nvme_attach_controller" 00:27:17.893 },{ 00:27:17.893 "params": { 00:27:17.893 "name": "Nvme1", 00:27:17.893 "trtype": "tcp", 00:27:17.893 "traddr": "10.0.0.3", 00:27:17.893 "adrfam": "ipv4", 00:27:17.893 "trsvcid": "4420", 00:27:17.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:17.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:17.893 "hdgst": false, 00:27:17.893 "ddgst": false 00:27:17.893 }, 00:27:17.893 "method": "bdev_nvme_attach_controller" 00:27:17.893 },{ 00:27:17.893 "params": { 00:27:17.893 "name": "Nvme2", 00:27:17.893 "trtype": "tcp", 00:27:17.893 "traddr": "10.0.0.3", 00:27:17.893 "adrfam": "ipv4", 00:27:17.893 "trsvcid": "4420", 00:27:17.893 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:17.893 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:17.893 "hdgst": false, 00:27:17.893 "ddgst": false 00:27:17.893 }, 00:27:17.893 "method": "bdev_nvme_attach_controller" 00:27:17.893 }' 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:17.893 03:11:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:27:18.152 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:18.152 ... 00:27:18.152 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:18.152 ... 00:27:18.152 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:18.152 ... 00:27:18.152 fio-3.35 00:27:18.152 Starting 24 threads 00:27:30.363 00:27:30.363 filename0: (groupid=0, jobs=1): err= 0: pid=89726: Thu Dec 5 03:12:00 2024 00:27:30.363 read: IOPS=190, BW=763KiB/s (781kB/s)(7684KiB/10072msec) 00:27:30.363 slat (nsec): min=5327, max=52319, avg=16066.78, stdev=6105.18 00:27:30.363 clat (msec): min=2, max=155, avg=83.64, stdev=27.01 00:27:30.363 lat (msec): min=2, max=155, avg=83.66, stdev=27.01 00:27:30.363 clat percentiles (msec): 00:27:30.363 | 1.00th=[ 8], 5.00th=[ 19], 10.00th=[ 54], 20.00th=[ 63], 00:27:30.363 | 30.00th=[ 82], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 94], 00:27:30.363 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 113], 95.00th=[ 128], 00:27:30.363 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 157], 00:27:30.363 | 99.99th=[ 157] 00:27:30.363 bw ( KiB/s): min= 624, max= 1641, per=4.25%, avg=761.65, stdev=212.84, samples=20 00:27:30.363 iops : min= 156, max= 410, avg=190.40, stdev=53.16, samples=20 00:27:30.363 lat (msec) : 4=0.10%, 10=4.69%, 20=0.94%, 50=3.70%, 100=72.62% 00:27:30.363 lat (msec) : 250=17.96% 00:27:30.363 cpu : usr=31.94%, sys=1.89%, ctx=921, majf=0, minf=1075 00:27:30.363 IO depths : 1=0.1%, 2=1.4%, 4=5.3%, 8=77.1%, 16=16.1%, 32=0.0%, >=64=0.0% 00:27:30.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.363 complete : 0=0.0%, 4=89.2%, 8=9.6%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.363 issued rwts: total=1921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.363 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.363 filename0: (groupid=0, jobs=1): err= 0: pid=89727: Thu Dec 5 03:12:00 2024 00:27:30.363 read: IOPS=188, BW=753KiB/s (771kB/s)(7560KiB/10038msec) 00:27:30.363 slat (usec): min=5, max=4029, avg=21.99, stdev=129.43 00:27:30.363 clat (msec): min=43, max=153, avg=84.72, stdev=19.16 00:27:30.363 lat (msec): min=43, max=153, avg=84.74, stdev=19.16 00:27:30.363 clat percentiles (msec): 00:27:30.363 | 1.00th=[ 50], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 65], 00:27:30.363 | 30.00th=[ 74], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 91], 00:27:30.363 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 121], 00:27:30.363 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 155], 99.95th=[ 155], 00:27:30.363 | 99.99th=[ 155] 00:27:30.363 bw ( KiB/s): min= 576, max= 864, per=4.18%, avg=749.42, stdev=68.27, samples=19 00:27:30.363 iops : min= 144, max= 216, avg=187.32, stdev=17.05, samples=19 00:27:30.363 lat (msec) : 50=1.01%, 100=83.60%, 250=15.40% 00:27:30.363 cpu : usr=41.19%, sys=2.15%, ctx=1609, majf=0, minf=1061 00:27:30.363 IO depths : 1=0.1%, 2=2.1%, 4=8.2%, 8=75.0%, 16=14.7%, 32=0.0%, >=64=0.0% 00:27:30.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.363 complete : 0=0.0%, 4=89.1%, 8=9.1%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.363 issued rwts: total=1890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.363 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.363 filename0: (groupid=0, jobs=1): err= 0: pid=89728: Thu Dec 5 03:12:00 2024 00:27:30.363 read: IOPS=183, 
BW=735KiB/s (752kB/s)(7364KiB/10021msec) 00:27:30.363 slat (usec): min=5, max=4040, avg=21.67, stdev=132.56 00:27:30.363 clat (msec): min=31, max=176, avg=86.96, stdev=22.04 00:27:30.363 lat (msec): min=31, max=176, avg=86.98, stdev=22.04 00:27:30.363 clat percentiles (msec): 00:27:30.363 | 1.00th=[ 46], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 66], 00:27:30.363 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 92], 00:27:30.363 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 114], 95.00th=[ 132], 00:27:30.363 | 99.00th=[ 153], 99.50th=[ 153], 99.90th=[ 176], 99.95th=[ 176], 00:27:30.363 | 99.99th=[ 176] 00:27:30.363 bw ( KiB/s): min= 512, max= 896, per=4.03%, avg=722.95, stdev=95.67, samples=19 00:27:30.363 iops : min= 128, max= 224, avg=180.74, stdev=23.92, samples=19 00:27:30.363 lat (msec) : 50=1.74%, 100=77.95%, 250=20.32% 00:27:30.363 cpu : usr=41.62%, sys=2.32%, ctx=1444, majf=0, minf=1074 00:27:30.363 IO depths : 1=0.1%, 2=1.9%, 4=7.6%, 8=75.8%, 16=14.7%, 32=0.0%, >=64=0.0% 00:27:30.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.363 complete : 0=0.0%, 4=88.9%, 8=9.4%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.363 issued rwts: total=1841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.363 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.363 filename0: (groupid=0, jobs=1): err= 0: pid=89729: Thu Dec 5 03:12:00 2024 00:27:30.363 read: IOPS=188, BW=755KiB/s (773kB/s)(7556KiB/10009msec) 00:27:30.363 slat (usec): min=5, max=8034, avg=28.59, stdev=276.78 00:27:30.363 clat (msec): min=9, max=151, avg=84.61, stdev=21.75 00:27:30.363 lat (msec): min=9, max=151, avg=84.64, stdev=21.74 00:27:30.363 clat percentiles (msec): 00:27:30.363 | 1.00th=[ 16], 5.00th=[ 52], 10.00th=[ 61], 20.00th=[ 63], 00:27:30.363 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 92], 00:27:30.363 | 70.00th=[ 96], 80.00th=[ 97], 90.00th=[ 111], 95.00th=[ 129], 00:27:30.363 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 153], 99.95th=[ 153], 00:27:30.363 | 99.99th=[ 153] 00:27:30.363 bw ( KiB/s): min= 520, max= 864, per=4.13%, avg=740.11, stdev=84.45, samples=19 00:27:30.363 iops : min= 130, max= 216, avg=184.95, stdev=21.17, samples=19 00:27:30.363 lat (msec) : 10=0.16%, 20=0.85%, 50=3.39%, 100=79.30%, 250=16.30% 00:27:30.363 cpu : usr=34.15%, sys=2.12%, ctx=965, majf=0, minf=1071 00:27:30.363 IO depths : 1=0.1%, 2=2.0%, 4=7.8%, 8=75.5%, 16=14.7%, 32=0.0%, >=64=0.0% 00:27:30.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.363 complete : 0=0.0%, 4=89.0%, 8=9.3%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.363 issued rwts: total=1889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.363 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.363 filename0: (groupid=0, jobs=1): err= 0: pid=89730: Thu Dec 5 03:12:00 2024 00:27:30.363 read: IOPS=184, BW=739KiB/s (757kB/s)(7404KiB/10014msec) 00:27:30.363 slat (usec): min=5, max=4045, avg=31.36, stdev=228.00 00:27:30.363 clat (msec): min=4, max=188, avg=86.34, stdev=24.04 00:27:30.363 lat (msec): min=4, max=189, avg=86.37, stdev=24.04 00:27:30.363 clat percentiles (msec): 00:27:30.363 | 1.00th=[ 13], 5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 66], 00:27:30.363 | 30.00th=[ 78], 40.00th=[ 83], 50.00th=[ 88], 60.00th=[ 92], 00:27:30.363 | 70.00th=[ 96], 80.00th=[ 100], 90.00th=[ 116], 95.00th=[ 130], 00:27:30.363 | 99.00th=[ 142], 99.50th=[ 161], 99.90th=[ 190], 99.95th=[ 190], 00:27:30.363 | 99.99th=[ 190] 00:27:30.363 bw ( KiB/s): min= 512, max= 816, per=3.98%, avg=713.68, 
stdev=76.11, samples=19 00:27:30.363 iops : min= 128, max= 204, avg=178.42, stdev=19.03, samples=19 00:27:30.363 lat (msec) : 10=0.38%, 20=1.30%, 50=1.89%, 100=77.47%, 250=18.96% 00:27:30.363 cpu : usr=40.19%, sys=2.39%, ctx=1333, majf=0, minf=1071 00:27:30.363 IO depths : 1=0.1%, 2=2.7%, 4=10.8%, 8=72.2%, 16=14.3%, 32=0.0%, >=64=0.0% 00:27:30.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.363 complete : 0=0.0%, 4=89.9%, 8=7.7%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.363 issued rwts: total=1851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.363 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.363 filename0: (groupid=0, jobs=1): err= 0: pid=89731: Thu Dec 5 03:12:00 2024 00:27:30.363 read: IOPS=189, BW=756KiB/s (775kB/s)(7612KiB/10064msec) 00:27:30.363 slat (usec): min=5, max=8041, avg=32.34, stdev=331.34 00:27:30.363 clat (msec): min=33, max=144, avg=84.35, stdev=20.88 00:27:30.363 lat (msec): min=33, max=144, avg=84.38, stdev=20.87 00:27:30.363 clat percentiles (msec): 00:27:30.364 | 1.00th=[ 37], 5.00th=[ 52], 10.00th=[ 61], 20.00th=[ 64], 00:27:30.364 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 91], 00:27:30.364 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 110], 95.00th=[ 123], 00:27:30.364 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 142], 99.95th=[ 144], 00:27:30.364 | 99.99th=[ 144] 00:27:30.364 bw ( KiB/s): min= 656, max= 880, per=4.20%, avg=753.26, stdev=65.77, samples=19 00:27:30.364 iops : min= 164, max= 220, avg=188.32, stdev=16.44, samples=19 00:27:30.364 lat (msec) : 50=4.52%, 100=79.03%, 250=16.45% 00:27:30.364 cpu : usr=35.24%, sys=2.34%, ctx=1089, majf=0, minf=1075 00:27:30.364 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=80.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:27:30.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.364 complete : 0=0.0%, 4=88.0%, 8=11.5%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.364 issued rwts: total=1903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.364 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.364 filename0: (groupid=0, jobs=1): err= 0: pid=89732: Thu Dec 5 03:12:00 2024 00:27:30.364 read: IOPS=167, BW=669KiB/s (685kB/s)(6720KiB/10040msec) 00:27:30.364 slat (usec): min=4, max=8040, avg=21.14, stdev=195.89 00:27:30.364 clat (msec): min=4, max=174, avg=95.26, stdev=28.45 00:27:30.364 lat (msec): min=4, max=174, avg=95.28, stdev=28.45 00:27:30.364 clat percentiles (msec): 00:27:30.364 | 1.00th=[ 9], 5.00th=[ 17], 10.00th=[ 75], 20.00th=[ 84], 00:27:30.364 | 30.00th=[ 88], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 96], 00:27:30.364 | 70.00th=[ 106], 80.00th=[ 121], 90.00th=[ 131], 95.00th=[ 134], 00:27:30.364 | 99.00th=[ 148], 99.50th=[ 148], 99.90th=[ 176], 99.95th=[ 176], 00:27:30.364 | 99.99th=[ 176] 00:27:30.364 bw ( KiB/s): min= 528, max= 1520, per=3.71%, avg=665.55, stdev=211.54, samples=20 00:27:30.364 iops : min= 132, max= 380, avg=166.30, stdev=52.91, samples=20 00:27:30.364 lat (msec) : 10=3.81%, 20=1.90%, 50=0.95%, 100=59.76%, 250=33.57% 00:27:30.364 cpu : usr=34.95%, sys=2.11%, ctx=965, majf=0, minf=1062 00:27:30.364 IO depths : 1=0.1%, 2=6.2%, 4=24.9%, 8=56.2%, 16=12.5%, 32=0.0%, >=64=0.0% 00:27:30.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.364 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.364 issued rwts: total=1680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.364 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.364 
filename0: (groupid=0, jobs=1): err= 0: pid=89733: Thu Dec 5 03:12:00 2024 00:27:30.364 read: IOPS=188, BW=755KiB/s (773kB/s)(7584KiB/10045msec) 00:27:30.364 slat (usec): min=5, max=8038, avg=34.05, stdev=326.07 00:27:30.364 clat (msec): min=35, max=156, avg=84.44, stdev=19.81 00:27:30.364 lat (msec): min=36, max=156, avg=84.48, stdev=19.81 00:27:30.364 clat percentiles (msec): 00:27:30.364 | 1.00th=[ 47], 5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 65], 00:27:30.364 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 86], 60.00th=[ 90], 00:27:30.364 | 70.00th=[ 95], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 120], 00:27:30.364 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 148], 99.95th=[ 157], 00:27:30.364 | 99.99th=[ 157] 00:27:30.364 bw ( KiB/s): min= 656, max= 968, per=4.19%, avg=750.11, stdev=67.48, samples=19 00:27:30.364 iops : min= 164, max= 242, avg=187.47, stdev=16.86, samples=19 00:27:30.364 lat (msec) : 50=2.95%, 100=80.49%, 250=16.56% 00:27:30.364 cpu : usr=38.92%, sys=2.08%, ctx=1161, majf=0, minf=1074 00:27:30.364 IO depths : 1=0.1%, 2=1.3%, 4=5.0%, 8=78.3%, 16=15.3%, 32=0.0%, >=64=0.0% 00:27:30.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.364 complete : 0=0.0%, 4=88.4%, 8=10.5%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.364 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.364 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.364 filename1: (groupid=0, jobs=1): err= 0: pid=89734: Thu Dec 5 03:12:00 2024 00:27:30.364 read: IOPS=188, BW=752KiB/s (771kB/s)(7548KiB/10031msec) 00:27:30.364 slat (usec): min=4, max=8035, avg=30.56, stdev=319.52 00:27:30.364 clat (msec): min=36, max=168, avg=84.83, stdev=20.65 00:27:30.364 lat (msec): min=36, max=168, avg=84.86, stdev=20.65 00:27:30.364 clat percentiles (msec): 00:27:30.364 | 1.00th=[ 48], 5.00th=[ 53], 10.00th=[ 61], 20.00th=[ 64], 00:27:30.364 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 88], 00:27:30.364 | 70.00th=[ 96], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 121], 00:27:30.364 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169], 00:27:30.364 | 99.99th=[ 169] 00:27:30.364 bw ( KiB/s): min= 576, max= 920, per=4.16%, avg=746.11, stdev=83.98, samples=19 00:27:30.364 iops : min= 144, max= 230, avg=186.53, stdev=21.00, samples=19 00:27:30.364 lat (msec) : 50=2.86%, 100=80.71%, 250=16.43% 00:27:30.364 cpu : usr=31.57%, sys=1.77%, ctx=887, majf=0, minf=1073 00:27:30.364 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=77.7%, 16=15.2%, 32=0.0%, >=64=0.0% 00:27:30.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.364 complete : 0=0.0%, 4=88.5%, 8=10.3%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.364 issued rwts: total=1887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.364 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.364 filename1: (groupid=0, jobs=1): err= 0: pid=89735: Thu Dec 5 03:12:00 2024 00:27:30.364 read: IOPS=182, BW=729KiB/s (746kB/s)(7304KiB/10024msec) 00:27:30.364 slat (usec): min=5, max=4036, avg=24.51, stdev=162.83 00:27:30.364 clat (msec): min=43, max=194, avg=87.64, stdev=21.97 00:27:30.364 lat (msec): min=43, max=194, avg=87.66, stdev=21.98 00:27:30.364 clat percentiles (msec): 00:27:30.364 | 1.00th=[ 48], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 66], 00:27:30.364 | 30.00th=[ 77], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 94], 00:27:30.364 | 70.00th=[ 96], 80.00th=[ 100], 90.00th=[ 114], 95.00th=[ 125], 00:27:30.364 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 194], 99.95th=[ 194], 
00:27:30.364 | 99.99th=[ 194] 00:27:30.364 bw ( KiB/s): min= 512, max= 824, per=4.02%, avg=720.42, stdev=91.23, samples=19 00:27:30.364 iops : min= 128, max= 206, avg=180.11, stdev=22.81, samples=19 00:27:30.364 lat (msec) : 50=1.59%, 100=78.75%, 250=19.66% 00:27:30.364 cpu : usr=38.69%, sys=2.21%, ctx=1239, majf=0, minf=1072 00:27:30.364 IO depths : 1=0.1%, 2=2.4%, 4=9.4%, 8=73.8%, 16=14.5%, 32=0.0%, >=64=0.0% 00:27:30.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.364 complete : 0=0.0%, 4=89.4%, 8=8.5%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.364 issued rwts: total=1826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.364 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.364 filename1: (groupid=0, jobs=1): err= 0: pid=89736: Thu Dec 5 03:12:00 2024 00:27:30.364 read: IOPS=191, BW=767KiB/s (785kB/s)(7720KiB/10065msec) 00:27:30.364 slat (usec): min=5, max=8038, avg=37.52, stdev=407.73 00:27:30.364 clat (msec): min=3, max=155, avg=83.10, stdev=26.16 00:27:30.364 lat (msec): min=3, max=155, avg=83.14, stdev=26.16 00:27:30.364 clat percentiles (msec): 00:27:30.364 | 1.00th=[ 8], 5.00th=[ 34], 10.00th=[ 55], 20.00th=[ 62], 00:27:30.364 | 30.00th=[ 73], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 95], 00:27:30.364 | 70.00th=[ 96], 80.00th=[ 97], 90.00th=[ 109], 95.00th=[ 121], 00:27:30.364 | 99.00th=[ 133], 99.50th=[ 140], 99.90th=[ 146], 99.95th=[ 157], 00:27:30.364 | 99.99th=[ 157] 00:27:30.364 bw ( KiB/s): min= 617, max= 1520, per=4.27%, avg=765.65, stdev=189.67, samples=20 00:27:30.364 iops : min= 154, max= 380, avg=191.35, stdev=47.45, samples=20 00:27:30.364 lat (msec) : 4=0.10%, 10=3.21%, 20=1.66%, 50=3.73%, 100=73.73% 00:27:30.364 lat (msec) : 250=17.56% 00:27:30.364 cpu : usr=31.61%, sys=1.93%, ctx=887, majf=0, minf=1074 00:27:30.364 IO depths : 1=0.1%, 2=0.9%, 4=3.3%, 8=79.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:27:30.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.364 complete : 0=0.0%, 4=88.7%, 8=10.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.364 issued rwts: total=1930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.364 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.364 filename1: (groupid=0, jobs=1): err= 0: pid=89737: Thu Dec 5 03:12:00 2024 00:27:30.364 read: IOPS=192, BW=770KiB/s (789kB/s)(7744KiB/10055msec) 00:27:30.364 slat (usec): min=5, max=7033, avg=23.56, stdev=196.07 00:27:30.364 clat (msec): min=34, max=151, avg=82.85, stdev=20.31 00:27:30.364 lat (msec): min=34, max=151, avg=82.87, stdev=20.31 00:27:30.364 clat percentiles (msec): 00:27:30.364 | 1.00th=[ 43], 5.00th=[ 52], 10.00th=[ 57], 20.00th=[ 64], 00:27:30.364 | 30.00th=[ 70], 40.00th=[ 79], 50.00th=[ 85], 60.00th=[ 90], 00:27:30.364 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 109], 95.00th=[ 123], 00:27:30.364 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 138], 99.95th=[ 153], 00:27:30.364 | 99.99th=[ 153] 00:27:30.364 bw ( KiB/s): min= 696, max= 873, per=4.29%, avg=769.70, stdev=52.50, samples=20 00:27:30.365 iops : min= 174, max= 218, avg=192.40, stdev=13.10, samples=20 00:27:30.365 lat (msec) : 50=4.44%, 100=80.68%, 250=14.88% 00:27:30.365 cpu : usr=39.72%, sys=2.23%, ctx=1513, majf=0, minf=1072 00:27:30.365 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.3%, 16=15.9%, 32=0.0%, >=64=0.0% 00:27:30.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.365 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.365 issued rwts: total=1936,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:27:30.365 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.365 filename1: (groupid=0, jobs=1): err= 0: pid=89738: Thu Dec 5 03:12:00 2024 00:27:30.365 read: IOPS=195, BW=782KiB/s (801kB/s)(7828KiB/10012msec) 00:27:30.365 slat (usec): min=5, max=6851, avg=26.75, stdev=200.13 00:27:30.365 clat (msec): min=15, max=215, avg=81.70, stdev=21.95 00:27:30.365 lat (msec): min=15, max=215, avg=81.73, stdev=21.95 00:27:30.365 clat percentiles (msec): 00:27:30.365 | 1.00th=[ 37], 5.00th=[ 53], 10.00th=[ 57], 20.00th=[ 63], 00:27:30.365 | 30.00th=[ 67], 40.00th=[ 78], 50.00th=[ 83], 60.00th=[ 88], 00:27:30.365 | 70.00th=[ 92], 80.00th=[ 97], 90.00th=[ 104], 95.00th=[ 123], 00:27:30.365 | 99.00th=[ 142], 99.50th=[ 163], 99.90th=[ 215], 99.95th=[ 215], 00:27:30.365 | 99.99th=[ 215] 00:27:30.365 bw ( KiB/s): min= 512, max= 872, per=4.30%, avg=770.11, stdev=78.83, samples=19 00:27:30.365 iops : min= 128, max= 218, avg=192.53, stdev=19.71, samples=19 00:27:30.365 lat (msec) : 20=0.36%, 50=3.83%, 100=80.99%, 250=14.82% 00:27:30.365 cpu : usr=39.71%, sys=2.68%, ctx=1781, majf=0, minf=1075 00:27:30.365 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=81.3%, 16=15.4%, 32=0.0%, >=64=0.0% 00:27:30.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.365 complete : 0=0.0%, 4=87.4%, 8=12.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.365 issued rwts: total=1957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.365 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.365 filename1: (groupid=0, jobs=1): err= 0: pid=89739: Thu Dec 5 03:12:00 2024 00:27:30.365 read: IOPS=188, BW=753KiB/s (771kB/s)(7576KiB/10060msec) 00:27:30.365 slat (usec): min=5, max=8040, avg=40.68, stdev=421.57 00:27:30.365 clat (msec): min=35, max=168, avg=84.71, stdev=20.15 00:27:30.365 lat (msec): min=35, max=168, avg=84.75, stdev=20.15 00:27:30.365 clat percentiles (msec): 00:27:30.365 | 1.00th=[ 46], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 65], 00:27:30.365 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 90], 00:27:30.365 | 70.00th=[ 96], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 121], 00:27:30.365 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 169], 99.95th=[ 169], 00:27:30.365 | 99.99th=[ 169] 00:27:30.365 bw ( KiB/s): min= 656, max= 896, per=4.18%, avg=749.11, stdev=65.55, samples=19 00:27:30.365 iops : min= 164, max= 224, avg=187.26, stdev=16.39, samples=19 00:27:30.365 lat (msec) : 50=3.80%, 100=81.68%, 250=14.52% 00:27:30.365 cpu : usr=35.05%, sys=1.92%, ctx=1073, majf=0, minf=1073 00:27:30.365 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=80.0%, 16=15.8%, 32=0.0%, >=64=0.0% 00:27:30.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.365 complete : 0=0.0%, 4=88.1%, 8=11.2%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.365 issued rwts: total=1894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.365 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.365 filename1: (groupid=0, jobs=1): err= 0: pid=89740: Thu Dec 5 03:12:00 2024 00:27:30.365 read: IOPS=196, BW=785KiB/s (804kB/s)(7872KiB/10026msec) 00:27:30.365 slat (usec): min=5, max=8032, avg=24.89, stdev=255.52 00:27:30.365 clat (msec): min=31, max=180, avg=81.38, stdev=21.96 00:27:30.365 lat (msec): min=31, max=180, avg=81.40, stdev=21.96 00:27:30.365 clat percentiles (msec): 00:27:30.365 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 61], 00:27:30.365 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 85], 60.00th=[ 85], 00:27:30.365 | 70.00th=[ 95], 80.00th=[ 96], 90.00th=[ 
108], 95.00th=[ 121], 00:27:30.365 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 182], 99.95th=[ 182], 00:27:30.365 | 99.99th=[ 182] 00:27:30.365 bw ( KiB/s): min= 576, max= 920, per=4.33%, avg=776.00, stdev=76.69, samples=19 00:27:30.365 iops : min= 144, max= 230, avg=194.00, stdev=19.17, samples=19 00:27:30.365 lat (msec) : 50=7.37%, 100=80.08%, 250=12.55% 00:27:30.365 cpu : usr=31.62%, sys=2.02%, ctx=919, majf=0, minf=1074 00:27:30.365 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=83.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:27:30.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.365 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.365 issued rwts: total=1968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.365 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.365 filename1: (groupid=0, jobs=1): err= 0: pid=89741: Thu Dec 5 03:12:00 2024 00:27:30.365 read: IOPS=175, BW=703KiB/s (720kB/s)(7060KiB/10041msec) 00:27:30.365 slat (usec): min=5, max=8044, avg=25.07, stdev=215.04 00:27:30.365 clat (msec): min=48, max=146, avg=90.72, stdev=19.95 00:27:30.365 lat (msec): min=48, max=146, avg=90.75, stdev=19.95 00:27:30.365 clat percentiles (msec): 00:27:30.365 | 1.00th=[ 55], 5.00th=[ 60], 10.00th=[ 64], 20.00th=[ 72], 00:27:30.365 | 30.00th=[ 83], 40.00th=[ 87], 50.00th=[ 92], 60.00th=[ 95], 00:27:30.365 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 120], 95.00th=[ 130], 00:27:30.365 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 148], 99.95th=[ 148], 00:27:30.365 | 99.99th=[ 148] 00:27:30.365 bw ( KiB/s): min= 512, max= 816, per=3.88%, avg=696.53, stdev=79.88, samples=19 00:27:30.365 iops : min= 128, max= 204, avg=174.11, stdev=19.94, samples=19 00:27:30.365 lat (msec) : 50=0.57%, 100=75.81%, 250=23.63% 00:27:30.365 cpu : usr=42.00%, sys=2.31%, ctx=1247, majf=0, minf=1074 00:27:30.365 IO depths : 1=0.1%, 2=3.4%, 4=13.5%, 8=69.1%, 16=13.9%, 32=0.0%, >=64=0.0% 00:27:30.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.365 complete : 0=0.0%, 4=90.8%, 8=6.3%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.365 issued rwts: total=1765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.365 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.365 filename2: (groupid=0, jobs=1): err= 0: pid=89742: Thu Dec 5 03:12:00 2024 00:27:30.365 read: IOPS=165, BW=661KiB/s (677kB/s)(6612KiB/10006msec) 00:27:30.365 slat (usec): min=4, max=8022, avg=21.12, stdev=197.44 00:27:30.365 clat (msec): min=28, max=184, avg=96.72, stdev=19.43 00:27:30.365 lat (msec): min=28, max=184, avg=96.74, stdev=19.43 00:27:30.365 clat percentiles (msec): 00:27:30.365 | 1.00th=[ 48], 5.00th=[ 72], 10.00th=[ 81], 20.00th=[ 85], 00:27:30.365 | 30.00th=[ 88], 40.00th=[ 92], 50.00th=[ 95], 60.00th=[ 96], 00:27:30.365 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 124], 95.00th=[ 133], 00:27:30.365 | 99.00th=[ 167], 99.50th=[ 167], 99.90th=[ 184], 99.95th=[ 184], 00:27:30.365 | 99.99th=[ 184] 00:27:30.365 bw ( KiB/s): min= 512, max= 768, per=3.61%, avg=646.74, stdev=72.99, samples=19 00:27:30.365 iops : min= 128, max= 192, avg=161.68, stdev=18.25, samples=19 00:27:30.365 lat (msec) : 50=1.33%, 100=71.87%, 250=26.80% 00:27:30.365 cpu : usr=37.77%, sys=2.18%, ctx=1086, majf=0, minf=1074 00:27:30.365 IO depths : 1=0.1%, 2=5.9%, 4=23.7%, 8=57.7%, 16=12.6%, 32=0.0%, >=64=0.0% 00:27:30.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.365 complete : 0=0.0%, 4=94.0%, 8=0.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:27:30.365 issued rwts: total=1653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.365 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.365 filename2: (groupid=0, jobs=1): err= 0: pid=89743: Thu Dec 5 03:12:00 2024 00:27:30.365 read: IOPS=195, BW=783KiB/s (802kB/s)(7840KiB/10008msec) 00:27:30.365 slat (usec): min=5, max=8033, avg=24.16, stdev=202.56 00:27:30.365 clat (msec): min=8, max=202, avg=81.58, stdev=22.76 00:27:30.365 lat (msec): min=8, max=202, avg=81.60, stdev=22.76 00:27:30.365 clat percentiles (msec): 00:27:30.365 | 1.00th=[ 26], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 62], 00:27:30.365 | 30.00th=[ 68], 40.00th=[ 74], 50.00th=[ 84], 60.00th=[ 87], 00:27:30.365 | 70.00th=[ 95], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:27:30.365 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 203], 99.95th=[ 203], 00:27:30.365 | 99.99th=[ 203] 00:27:30.365 bw ( KiB/s): min= 620, max= 896, per=4.27%, avg=766.63, stdev=75.44, samples=19 00:27:30.365 iops : min= 155, max= 224, avg=191.63, stdev=18.82, samples=19 00:27:30.365 lat (msec) : 10=0.15%, 20=0.82%, 50=4.69%, 100=80.05%, 250=14.29% 00:27:30.365 cpu : usr=34.73%, sys=2.41%, ctx=1029, majf=0, minf=1073 00:27:30.365 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.9%, 16=15.3%, 32=0.0%, >=64=0.0% 00:27:30.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.365 complete : 0=0.0%, 4=87.5%, 8=11.8%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.365 issued rwts: total=1960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.365 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.365 filename2: (groupid=0, jobs=1): err= 0: pid=89744: Thu Dec 5 03:12:00 2024 00:27:30.365 read: IOPS=173, BW=693KiB/s (709kB/s)(6968KiB/10060msec) 00:27:30.365 slat (usec): min=5, max=12032, avg=37.21, stdev=439.86 00:27:30.365 clat (msec): min=47, max=191, avg=92.08, stdev=21.56 00:27:30.365 lat (msec): min=47, max=191, avg=92.12, stdev=21.57 00:27:30.365 clat percentiles (msec): 00:27:30.365 | 1.00th=[ 52], 5.00th=[ 61], 10.00th=[ 65], 20.00th=[ 75], 00:27:30.365 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 93], 60.00th=[ 95], 00:27:30.365 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 121], 95.00th=[ 133], 00:27:30.365 | 99.00th=[ 169], 99.50th=[ 169], 99.90th=[ 192], 99.95th=[ 192], 00:27:30.365 | 99.99th=[ 192] 00:27:30.365 bw ( KiB/s): min= 512, max= 784, per=3.83%, avg=686.79, stdev=87.53, samples=19 00:27:30.365 iops : min= 128, max= 196, avg=171.68, stdev=21.87, samples=19 00:27:30.365 lat (msec) : 50=0.52%, 100=78.82%, 250=20.67% 00:27:30.365 cpu : usr=31.45%, sys=2.01%, ctx=899, majf=0, minf=1072 00:27:30.365 IO depths : 1=0.1%, 2=2.9%, 4=11.7%, 8=70.7%, 16=14.6%, 32=0.0%, >=64=0.0% 00:27:30.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.365 complete : 0=0.0%, 4=90.6%, 8=6.9%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.365 issued rwts: total=1742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.365 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.365 filename2: (groupid=0, jobs=1): err= 0: pid=89745: Thu Dec 5 03:12:00 2024 00:27:30.365 read: IOPS=193, BW=774KiB/s (793kB/s)(7748KiB/10006msec) 00:27:30.365 slat (usec): min=4, max=8031, avg=25.42, stdev=257.53 00:27:30.365 clat (msec): min=4, max=212, avg=82.50, stdev=24.68 00:27:30.365 lat (msec): min=4, max=212, avg=82.53, stdev=24.68 00:27:30.366 clat percentiles (msec): 00:27:30.366 | 1.00th=[ 11], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 61], 00:27:30.366 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 85], 
60.00th=[ 86], 00:27:30.366 | 70.00th=[ 95], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 121], 00:27:30.366 | 99.00th=[ 144], 99.50th=[ 188], 99.90th=[ 213], 99.95th=[ 213], 00:27:30.366 | 99.99th=[ 213] 00:27:30.366 bw ( KiB/s): min= 512, max= 872, per=4.16%, avg=745.68, stdev=78.77, samples=19 00:27:30.366 iops : min= 128, max= 218, avg=186.42, stdev=19.69, samples=19 00:27:30.366 lat (msec) : 10=0.77%, 20=1.29%, 50=5.63%, 100=78.32%, 250=13.99% 00:27:30.366 cpu : usr=31.54%, sys=1.96%, ctx=863, majf=0, minf=1074 00:27:30.366 IO depths : 1=0.1%, 2=1.5%, 4=6.1%, 8=77.4%, 16=14.9%, 32=0.0%, >=64=0.0% 00:27:30.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.366 complete : 0=0.0%, 4=88.4%, 8=10.2%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.366 issued rwts: total=1937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.366 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.366 filename2: (groupid=0, jobs=1): err= 0: pid=89746: Thu Dec 5 03:12:00 2024 00:27:30.366 read: IOPS=182, BW=730KiB/s (748kB/s)(7316KiB/10018msec) 00:27:30.366 slat (usec): min=4, max=8033, avg=33.09, stdev=310.69 00:27:30.366 clat (msec): min=4, max=190, avg=87.43, stdev=23.63 00:27:30.366 lat (msec): min=4, max=190, avg=87.47, stdev=23.63 00:27:30.366 clat percentiles (msec): 00:27:30.366 | 1.00th=[ 11], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 69], 00:27:30.366 | 30.00th=[ 83], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 92], 00:27:30.366 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 115], 95.00th=[ 131], 00:27:30.366 | 99.00th=[ 144], 99.50th=[ 167], 99.90th=[ 190], 99.95th=[ 190], 00:27:30.366 | 99.99th=[ 190] 00:27:30.366 bw ( KiB/s): min= 528, max= 824, per=3.93%, avg=704.00, stdev=83.35, samples=19 00:27:30.366 iops : min= 132, max= 206, avg=176.00, stdev=20.84, samples=19 00:27:30.366 lat (msec) : 10=0.71%, 20=1.15%, 50=1.31%, 100=76.76%, 250=20.07% 00:27:30.366 cpu : usr=42.87%, sys=2.80%, ctx=1122, majf=0, minf=1073 00:27:30.366 IO depths : 1=0.1%, 2=3.3%, 4=13.3%, 8=69.4%, 16=13.9%, 32=0.0%, >=64=0.0% 00:27:30.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.366 complete : 0=0.0%, 4=90.6%, 8=6.4%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.366 issued rwts: total=1829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.366 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.366 filename2: (groupid=0, jobs=1): err= 0: pid=89747: Thu Dec 5 03:12:00 2024 00:27:30.366 read: IOPS=179, BW=716KiB/s (733kB/s)(7212KiB/10070msec) 00:27:30.366 slat (nsec): min=5378, max=59786, avg=16920.78, stdev=6479.48 00:27:30.366 clat (msec): min=9, max=155, avg=89.08, stdev=23.62 00:27:30.366 lat (msec): min=9, max=155, avg=89.10, stdev=23.62 00:27:30.366 clat percentiles (msec): 00:27:30.366 | 1.00th=[ 10], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 75], 00:27:30.366 | 30.00th=[ 84], 40.00th=[ 87], 50.00th=[ 92], 60.00th=[ 96], 00:27:30.366 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 120], 95.00th=[ 127], 00:27:30.366 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 157], 00:27:30.366 | 99.99th=[ 157] 00:27:30.366 bw ( KiB/s): min= 520, max= 1248, per=3.98%, avg=714.80, stdev=141.93, samples=20 00:27:30.366 iops : min= 130, max= 312, avg=178.70, stdev=35.48, samples=20 00:27:30.366 lat (msec) : 10=1.55%, 20=1.77%, 50=1.39%, 100=70.22%, 250=25.07% 00:27:30.366 cpu : usr=35.55%, sys=2.01%, ctx=1023, majf=0, minf=1072 00:27:30.366 IO depths : 1=0.1%, 2=2.9%, 4=11.3%, 8=70.8%, 16=14.9%, 32=0.0%, >=64=0.0% 00:27:30.366 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.366 complete : 0=0.0%, 4=90.7%, 8=6.8%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.366 issued rwts: total=1803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.366 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.366 filename2: (groupid=0, jobs=1): err= 0: pid=89748: Thu Dec 5 03:12:00 2024 00:27:30.366 read: IOPS=234, BW=938KiB/s (961kB/s)(9460KiB/10082msec) 00:27:30.366 slat (usec): min=4, max=7339, avg=23.87, stdev=214.51 00:27:30.366 clat (usec): min=1859, max=154821, avg=67909.71, stdev=39667.46 00:27:30.366 lat (usec): min=1868, max=154832, avg=67933.58, stdev=39676.50 00:27:30.366 clat percentiles (usec): 00:27:30.366 | 1.00th=[ 1926], 5.00th=[ 1991], 10.00th=[ 2057], 20.00th=[ 8586], 00:27:30.366 | 30.00th=[ 60031], 40.00th=[ 68682], 50.00th=[ 81265], 60.00th=[ 87557], 00:27:30.366 | 70.00th=[ 92799], 80.00th=[ 96994], 90.00th=[108528], 95.00th=[121111], 00:27:30.366 | 99.00th=[135267], 99.50th=[137364], 99.90th=[152044], 99.95th=[152044], 00:27:30.366 | 99.99th=[154141] 00:27:30.366 bw ( KiB/s): min= 632, max= 5073, per=5.23%, avg=938.05, stdev=975.14, samples=20 00:27:30.366 iops : min= 158, max= 1268, avg=234.50, stdev=243.73, samples=20 00:27:30.366 lat (msec) : 2=5.79%, 4=11.80%, 10=4.36%, 20=1.73%, 50=0.59% 00:27:30.366 lat (msec) : 100=60.42%, 250=15.31% 00:27:30.366 cpu : usr=41.70%, sys=2.50%, ctx=1638, majf=0, minf=1073 00:27:30.366 IO depths : 1=1.1%, 2=4.0%, 4=11.6%, 8=69.8%, 16=13.5%, 32=0.0%, >=64=0.0% 00:27:30.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.366 complete : 0=0.0%, 4=90.4%, 8=7.0%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.366 issued rwts: total=2365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.366 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.366 filename2: (groupid=0, jobs=1): err= 0: pid=89749: Thu Dec 5 03:12:00 2024 00:27:30.366 read: IOPS=184, BW=736KiB/s (754kB/s)(7368KiB/10009msec) 00:27:30.366 slat (usec): min=5, max=8038, avg=31.87, stdev=305.31 00:27:30.366 clat (msec): min=31, max=163, avg=86.76, stdev=20.28 00:27:30.366 lat (msec): min=31, max=163, avg=86.79, stdev=20.27 00:27:30.366 clat percentiles (msec): 00:27:30.366 | 1.00th=[ 48], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 69], 00:27:30.366 | 30.00th=[ 79], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 92], 00:27:30.366 | 70.00th=[ 96], 80.00th=[ 100], 90.00th=[ 112], 95.00th=[ 127], 00:27:30.366 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 163], 99.95th=[ 163], 00:27:30.366 | 99.99th=[ 163] 00:27:30.366 bw ( KiB/s): min= 624, max= 864, per=4.03%, avg=722.95, stdev=71.42, samples=19 00:27:30.366 iops : min= 156, max= 216, avg=180.74, stdev=17.85, samples=19 00:27:30.366 lat (msec) : 50=1.95%, 100=78.50%, 250=19.54% 00:27:30.366 cpu : usr=38.00%, sys=2.26%, ctx=1071, majf=0, minf=1072 00:27:30.366 IO depths : 1=0.1%, 2=2.4%, 4=9.7%, 8=73.4%, 16=14.4%, 32=0.0%, >=64=0.0% 00:27:30.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.366 complete : 0=0.0%, 4=89.6%, 8=8.3%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.366 issued rwts: total=1842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.366 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:30.366 00:27:30.366 Run status group 0 (all jobs): 00:27:30.366 READ: bw=17.5MiB/s (18.3MB/s), 661KiB/s-938KiB/s (677kB/s-961kB/s), io=176MiB (185MB), run=10006-10082msec 00:27:30.625 ----------------------------------------------------- 00:27:30.625 Suppressions used: 00:27:30.625 
count bytes template 00:27:30.625 45 402 /usr/src/fio/parse.c 00:27:30.625 1 8 libtcmalloc_minimal.so 00:27:30.625 1 904 libcrypto.so 00:27:30.625 ----------------------------------------------------- 00:27:30.625 00:27:30.625 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:30.625 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:30.625 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:30.625 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:30.625 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:30.625 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:30.625 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.625 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:30.625 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.625 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:30.625 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.625 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:30.625 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.625 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:30.625 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:30.626 bdev_null0 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:30.626 [2024-12-05 03:12:01.306736] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:30.626 03:12:01 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:30.626 bdev_null1 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.626 { 00:27:30.626 "params": { 00:27:30.626 "name": "Nvme$subsystem", 00:27:30.626 "trtype": "$TEST_TRANSPORT", 00:27:30.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.626 "adrfam": "ipv4", 00:27:30.626 "trsvcid": "$NVMF_PORT", 00:27:30.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.626 "hdgst": ${hdgst:-false}, 00:27:30.626 "ddgst": ${ddgst:-false} 00:27:30.626 }, 00:27:30.626 "method": "bdev_nvme_attach_controller" 00:27:30.626 } 00:27:30.626 EOF 00:27:30.626 )") 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:30.626 03:12:01 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.626 { 00:27:30.626 "params": { 00:27:30.626 "name": "Nvme$subsystem", 00:27:30.626 "trtype": "$TEST_TRANSPORT", 00:27:30.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.626 "adrfam": "ipv4", 00:27:30.626 "trsvcid": "$NVMF_PORT", 00:27:30.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.626 "hdgst": ${hdgst:-false}, 00:27:30.626 "ddgst": ${ddgst:-false} 00:27:30.626 }, 00:27:30.626 "method": "bdev_nvme_attach_controller" 00:27:30.626 } 00:27:30.626 EOF 00:27:30.626 )") 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
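The trace above assembles, one heredoc per subsystem, the bdev_nvme_attach_controller entries that tell the SPDK bdev layer how to connect to cnode0 and cnode1 over TCP at 10.0.0.3:4420; jq then pretty-prints the merged document, which is streamed to fio over /dev/fd/62 while the SPDK bdev fio plugin (and, in this ASAN build, libasan) is LD_PRELOADed. The fully rendered config and the fio launch follow just below. A rough standalone equivalent is sketched here; the file paths and job file name are illustrative, and the exact wrapper emitted by gen_nvmf_target_json may differ slightly from this minimal form:

# Minimal sketch: one controller instead of two, config in a file instead of /dev/fd/62
cat > /tmp/nvme_tcp.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# job.fio stands in for the job file the harness generates on /dev/fd/61
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme_tcp.json job.fio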
00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:30.626 03:12:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:30.626 "params": { 00:27:30.626 "name": "Nvme0", 00:27:30.626 "trtype": "tcp", 00:27:30.626 "traddr": "10.0.0.3", 00:27:30.626 "adrfam": "ipv4", 00:27:30.626 "trsvcid": "4420", 00:27:30.627 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:30.627 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:30.627 "hdgst": false, 00:27:30.627 "ddgst": false 00:27:30.627 }, 00:27:30.627 "method": "bdev_nvme_attach_controller" 00:27:30.627 },{ 00:27:30.627 "params": { 00:27:30.627 "name": "Nvme1", 00:27:30.627 "trtype": "tcp", 00:27:30.627 "traddr": "10.0.0.3", 00:27:30.627 "adrfam": "ipv4", 00:27:30.627 "trsvcid": "4420", 00:27:30.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:30.627 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:30.627 "hdgst": false, 00:27:30.627 "ddgst": false 00:27:30.627 }, 00:27:30.627 "method": "bdev_nvme_attach_controller" 00:27:30.627 }' 00:27:30.627 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:30.627 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:30.627 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:27:30.627 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:30.627 03:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:30.884 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:30.884 ... 00:27:30.884 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:30.884 ... 
00:27:30.884 fio-3.35 00:27:30.884 Starting 4 threads 00:27:37.443 00:27:37.443 filename0: (groupid=0, jobs=1): err= 0: pid=89888: Thu Dec 5 03:12:07 2024 00:27:37.443 read: IOPS=1630, BW=12.7MiB/s (13.4MB/s)(63.7MiB/5003msec) 00:27:37.443 slat (nsec): min=5409, max=76555, avg=16887.23, stdev=5299.71 00:27:37.443 clat (usec): min=1511, max=8362, avg=4839.03, stdev=535.80 00:27:37.443 lat (usec): min=1525, max=8384, avg=4855.92, stdev=535.77 00:27:37.443 clat percentiles (usec): 00:27:37.443 | 1.00th=[ 3064], 5.00th=[ 4359], 10.00th=[ 4359], 20.00th=[ 4424], 00:27:37.443 | 30.00th=[ 4490], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 5014], 00:27:37.443 | 70.00th=[ 5145], 80.00th=[ 5276], 90.00th=[ 5407], 95.00th=[ 5604], 00:27:37.443 | 99.00th=[ 6194], 99.50th=[ 6456], 99.90th=[ 8029], 99.95th=[ 8094], 00:27:37.443 | 99.99th=[ 8356] 00:27:37.443 bw ( KiB/s): min=11648, max=14336, per=23.43%, avg=13085.67, stdev=989.99, samples=9 00:27:37.443 iops : min= 1456, max= 1792, avg=1635.67, stdev=123.71, samples=9 00:27:37.443 lat (msec) : 2=0.05%, 4=2.43%, 10=97.52% 00:27:37.443 cpu : usr=92.24%, sys=6.84%, ctx=7, majf=0, minf=1073 00:27:37.443 IO depths : 1=0.1%, 2=23.9%, 4=50.7%, 8=25.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:37.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.443 complete : 0=0.0%, 4=90.5%, 8=9.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.443 issued rwts: total=8159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.443 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:37.443 filename0: (groupid=0, jobs=1): err= 0: pid=89889: Thu Dec 5 03:12:07 2024 00:27:37.443 read: IOPS=1870, BW=14.6MiB/s (15.3MB/s)(73.1MiB/5002msec) 00:27:37.443 slat (nsec): min=5522, max=72917, avg=15182.14, stdev=5927.51 00:27:37.443 clat (usec): min=1021, max=8296, avg=4228.38, stdev=941.55 00:27:37.443 lat (usec): min=1030, max=8317, avg=4243.56, stdev=941.87 00:27:37.443 clat percentiles (usec): 00:27:37.443 | 1.00th=[ 1598], 5.00th=[ 2507], 10.00th=[ 2606], 20.00th=[ 3097], 00:27:37.443 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4621], 00:27:37.443 | 70.00th=[ 4686], 80.00th=[ 4948], 90.00th=[ 5145], 95.00th=[ 5276], 00:27:37.443 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 6325], 99.95th=[ 6783], 00:27:37.443 | 99.99th=[ 8291] 00:27:37.443 bw ( KiB/s): min=13696, max=16496, per=26.65%, avg=14878.56, stdev=1159.30, samples=9 00:27:37.443 iops : min= 1712, max= 2062, avg=1859.78, stdev=144.95, samples=9 00:27:37.443 lat (msec) : 2=1.46%, 4=24.05%, 10=74.49% 00:27:37.443 cpu : usr=91.70%, sys=7.24%, ctx=10, majf=0, minf=1074 00:27:37.443 IO depths : 1=0.1%, 2=12.6%, 4=56.9%, 8=30.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:37.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.443 complete : 0=0.0%, 4=95.2%, 8=4.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.443 issued rwts: total=9356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.443 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:37.443 filename1: (groupid=0, jobs=1): err= 0: pid=89890: Thu Dec 5 03:12:07 2024 00:27:37.443 read: IOPS=1848, BW=14.4MiB/s (15.1MB/s)(72.2MiB/5002msec) 00:27:37.443 slat (nsec): min=5427, max=76959, avg=14492.41, stdev=6164.71 00:27:37.443 clat (usec): min=1024, max=8056, avg=4279.03, stdev=965.53 00:27:37.443 lat (usec): min=1033, max=8085, avg=4293.52, stdev=964.59 00:27:37.443 clat percentiles (usec): 00:27:37.443 | 1.00th=[ 1483], 5.00th=[ 2507], 10.00th=[ 2606], 20.00th=[ 3326], 00:27:37.443 | 30.00th=[ 4359], 
40.00th=[ 4424], 50.00th=[ 4555], 60.00th=[ 4621], 00:27:37.443 | 70.00th=[ 4752], 80.00th=[ 4948], 90.00th=[ 5145], 95.00th=[ 5407], 00:27:37.443 | 99.00th=[ 6063], 99.50th=[ 6521], 99.90th=[ 7570], 99.95th=[ 7701], 00:27:37.443 | 99.99th=[ 8029] 00:27:37.443 bw ( KiB/s): min=12032, max=16544, per=26.27%, avg=14670.22, stdev=1498.32, samples=9 00:27:37.443 iops : min= 1504, max= 2068, avg=1833.78, stdev=187.29, samples=9 00:27:37.443 lat (msec) : 2=1.43%, 4=22.46%, 10=76.11% 00:27:37.443 cpu : usr=92.22%, sys=6.74%, ctx=13, majf=0, minf=1075 00:27:37.443 IO depths : 1=0.1%, 2=13.5%, 4=56.4%, 8=30.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:37.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.443 complete : 0=0.0%, 4=94.8%, 8=5.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.443 issued rwts: total=9244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.443 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:37.443 filename1: (groupid=0, jobs=1): err= 0: pid=89891: Thu Dec 5 03:12:07 2024 00:27:37.443 read: IOPS=1631, BW=12.7MiB/s (13.4MB/s)(63.8MiB/5004msec) 00:27:37.443 slat (nsec): min=5483, max=60771, avg=16831.33, stdev=5394.84 00:27:37.443 clat (usec): min=1500, max=8325, avg=4836.08, stdev=525.69 00:27:37.443 lat (usec): min=1514, max=8345, avg=4852.91, stdev=525.70 00:27:37.443 clat percentiles (usec): 00:27:37.443 | 1.00th=[ 3064], 5.00th=[ 4359], 10.00th=[ 4359], 20.00th=[ 4424], 00:27:37.443 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 5014], 00:27:37.443 | 70.00th=[ 5145], 80.00th=[ 5276], 90.00th=[ 5407], 95.00th=[ 5604], 00:27:37.443 | 99.00th=[ 6128], 99.50th=[ 6390], 99.90th=[ 7308], 99.95th=[ 7373], 00:27:37.443 | 99.99th=[ 8356] 00:27:37.443 bw ( KiB/s): min=11648, max=14336, per=23.45%, avg=13096.89, stdev=1000.07, samples=9 00:27:37.443 iops : min= 1456, max= 1792, avg=1637.11, stdev=125.01, samples=9 00:27:37.443 lat (msec) : 2=0.05%, 4=2.42%, 10=97.53% 00:27:37.443 cpu : usr=93.04%, sys=6.06%, ctx=6, majf=0, minf=1075 00:27:37.443 IO depths : 1=0.1%, 2=23.9%, 4=50.7%, 8=25.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:37.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.443 complete : 0=0.0%, 4=90.5%, 8=9.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.443 issued rwts: total=8166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.443 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:37.443 00:27:37.443 Run status group 0 (all jobs): 00:27:37.443 READ: bw=54.5MiB/s (57.2MB/s), 12.7MiB/s-14.6MiB/s (13.4MB/s-15.3MB/s), io=273MiB (286MB), run=5002-5004msec 00:27:38.011 ----------------------------------------------------- 00:27:38.011 Suppressions used: 00:27:38.011 count bytes template 00:27:38.011 6 52 /usr/src/fio/parse.c 00:27:38.011 1 8 libtcmalloc_minimal.so 00:27:38.011 1 904 libcrypto.so 00:27:38.011 ----------------------------------------------------- 00:27:38.011 00:27:38.011 03:12:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:38.012 03:12:08 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.012 ************************************ 00:27:38.012 END TEST fio_dif_rand_params 00:27:38.012 ************************************ 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.012 00:27:38.012 real 0m27.249s 00:27:38.012 user 2m7.299s 00:27:38.012 sys 0m8.778s 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:38.012 03:12:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.012 03:12:08 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:38.012 03:12:08 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:38.012 03:12:08 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:38.012 03:12:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:38.012 ************************************ 00:27:38.012 START TEST fio_dif_digest 00:27:38.012 ************************************ 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 
00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:38.012 bdev_null0 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:38.012 [2024-12-05 03:12:08.706514] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:38.012 { 00:27:38.012 "params": { 00:27:38.012 "name": "Nvme$subsystem", 00:27:38.012 "trtype": "$TEST_TRANSPORT", 00:27:38.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.012 "adrfam": "ipv4", 00:27:38.012 "trsvcid": "$NVMF_PORT", 00:27:38.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.012 "hdgst": 
${hdgst:-false}, 00:27:38.012 "ddgst": ${ddgst:-false} 00:27:38.012 }, 00:27:38.012 "method": "bdev_nvme_attach_controller" 00:27:38.012 } 00:27:38.012 EOF 00:27:38.012 )") 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
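Compared with the earlier runs, this digest pass exports a null bdev created with 16 bytes of per-block metadata and DIF type 3, and the initiator side enables NVMe/TCP header and data digests ("hdgst": true and "ddgst": true in the bdev_nvme_attach_controller params rendered just below). On the target side, the rpc_cmd calls traced above correspond to roughly the following explicit rpc.py invocations; rpc_cmd is the harness's wrapper around scripts/rpc.py, and a tcp transport is assumed to have been created earlier in the run:

# 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# expose it through subsystem cnode0 on NVMe/TCP 10.0.0.3:4420
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420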
00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:38.012 "params": { 00:27:38.012 "name": "Nvme0", 00:27:38.012 "trtype": "tcp", 00:27:38.012 "traddr": "10.0.0.3", 00:27:38.012 "adrfam": "ipv4", 00:27:38.012 "trsvcid": "4420", 00:27:38.012 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:38.012 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:38.012 "hdgst": true, 00:27:38.012 "ddgst": true 00:27:38.012 }, 00:27:38.012 "method": "bdev_nvme_attach_controller" 00:27:38.012 }' 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:38.012 03:12:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:38.271 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:38.271 ... 00:27:38.271 fio-3.35 00:27:38.271 Starting 3 threads 00:27:50.478 00:27:50.479 filename0: (groupid=0, jobs=1): err= 0: pid=90001: Thu Dec 5 03:12:19 2024 00:27:50.479 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(247MiB/10015msec) 00:27:50.479 slat (nsec): min=8323, max=63825, avg=18058.12, stdev=6679.35 00:27:50.479 clat (usec): min=14468, max=17820, avg=15154.00, stdev=521.02 00:27:50.479 lat (usec): min=14484, max=17860, avg=15172.05, stdev=521.59 00:27:50.479 clat percentiles (usec): 00:27:50.479 | 1.00th=[14484], 5.00th=[14615], 10.00th=[14746], 20.00th=[14746], 00:27:50.479 | 30.00th=[14877], 40.00th=[14877], 50.00th=[15008], 60.00th=[15008], 00:27:50.479 | 70.00th=[15270], 80.00th=[15533], 90.00th=[15795], 95.00th=[16319], 00:27:50.479 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17695], 99.95th=[17695], 00:27:50.479 | 99.99th=[17695] 00:27:50.479 bw ( KiB/s): min=24576, max=26112, per=33.33%, avg=25267.20, stdev=492.08, samples=20 00:27:50.479 iops : min= 192, max= 204, avg=197.40, stdev= 3.84, samples=20 00:27:50.479 lat (msec) : 20=100.00% 00:27:50.479 cpu : usr=92.47%, sys=6.88%, ctx=24, majf=0, minf=1074 00:27:50.479 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:50.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.479 issued rwts: total=1977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.479 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:50.479 filename0: (groupid=0, jobs=1): err= 0: pid=90002: Thu Dec 5 03:12:19 2024 00:27:50.479 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(247MiB/10013msec) 00:27:50.479 slat (nsec): min=5775, max=75592, avg=18597.85, stdev=6600.53 00:27:50.479 clat (usec): min=14471, 
max=17516, avg=15150.61, stdev=510.21 00:27:50.479 lat (usec): min=14485, max=17537, avg=15169.21, stdev=510.77 00:27:50.479 clat percentiles (usec): 00:27:50.479 | 1.00th=[14484], 5.00th=[14615], 10.00th=[14746], 20.00th=[14746], 00:27:50.479 | 30.00th=[14877], 40.00th=[14877], 50.00th=[15008], 60.00th=[15008], 00:27:50.479 | 70.00th=[15270], 80.00th=[15533], 90.00th=[15795], 95.00th=[16188], 00:27:50.479 | 99.00th=[16909], 99.50th=[16909], 99.90th=[17433], 99.95th=[17433], 00:27:50.479 | 99.99th=[17433] 00:27:50.479 bw ( KiB/s): min=24576, max=26112, per=33.34%, avg=25269.65, stdev=488.56, samples=20 00:27:50.479 iops : min= 192, max= 204, avg=197.40, stdev= 3.84, samples=20 00:27:50.479 lat (msec) : 20=100.00% 00:27:50.479 cpu : usr=92.32%, sys=7.06%, ctx=11, majf=0, minf=1072 00:27:50.479 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:50.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.479 issued rwts: total=1977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.479 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:50.479 filename0: (groupid=0, jobs=1): err= 0: pid=90003: Thu Dec 5 03:12:19 2024 00:27:50.479 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(247MiB/10014msec) 00:27:50.479 slat (nsec): min=5440, max=77903, avg=18557.57, stdev=6343.27 00:27:50.479 clat (usec): min=14464, max=17518, avg=15152.81, stdev=517.81 00:27:50.479 lat (usec): min=14473, max=17541, avg=15171.37, stdev=518.45 00:27:50.479 clat percentiles (usec): 00:27:50.479 | 1.00th=[14484], 5.00th=[14615], 10.00th=[14746], 20.00th=[14746], 00:27:50.479 | 30.00th=[14877], 40.00th=[14877], 50.00th=[15008], 60.00th=[15008], 00:27:50.479 | 70.00th=[15139], 80.00th=[15533], 90.00th=[15795], 95.00th=[16188], 00:27:50.479 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17433], 99.95th=[17433], 00:27:50.479 | 99.99th=[17433] 00:27:50.479 bw ( KiB/s): min=24576, max=26112, per=33.33%, avg=25267.20, stdev=492.08, samples=20 00:27:50.479 iops : min= 192, max= 204, avg=197.40, stdev= 3.84, samples=20 00:27:50.479 lat (msec) : 20=100.00% 00:27:50.479 cpu : usr=92.49%, sys=6.89%, ctx=19, majf=0, minf=1074 00:27:50.479 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:50.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.479 issued rwts: total=1977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.479 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:50.479 00:27:50.479 Run status group 0 (all jobs): 00:27:50.479 READ: bw=74.0MiB/s (77.6MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=741MiB (777MB), run=10013-10015msec 00:27:50.479 ----------------------------------------------------- 00:27:50.479 Suppressions used: 00:27:50.479 count bytes template 00:27:50.479 5 44 /usr/src/fio/parse.c 00:27:50.479 1 8 libtcmalloc_minimal.so 00:27:50.479 1 904 libcrypto.so 00:27:50.479 ----------------------------------------------------- 00:27:50.479 00:27:50.479 03:12:20 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:50.479 03:12:20 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:27:50.479 03:12:20 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:27:50.479 03:12:20 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:50.479 03:12:20 
nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:27:50.479 03:12:20 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:50.479 03:12:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.479 03:12:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:50.479 03:12:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.479 03:12:20 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:50.479 03:12:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.479 03:12:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:50.479 03:12:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.479 00:27:50.479 real 0m12.276s 00:27:50.479 user 0m29.654s 00:27:50.479 sys 0m2.408s 00:27:50.479 03:12:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:50.479 03:12:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:50.479 ************************************ 00:27:50.479 END TEST fio_dif_digest 00:27:50.479 ************************************ 00:27:50.479 03:12:20 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:50.479 03:12:20 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:27:50.479 03:12:20 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:50.479 03:12:20 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:27:50.479 03:12:21 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:50.479 03:12:21 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:27:50.479 03:12:21 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:50.479 03:12:21 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:50.479 rmmod nvme_tcp 00:27:50.479 rmmod nvme_fabrics 00:27:50.479 rmmod nvme_keyring 00:27:50.479 03:12:21 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:50.479 03:12:21 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:27:50.479 03:12:21 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:27:50.479 03:12:21 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 89246 ']' 00:27:50.479 03:12:21 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 89246 00:27:50.479 03:12:21 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 89246 ']' 00:27:50.479 03:12:21 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 89246 00:27:50.479 03:12:21 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:27:50.479 03:12:21 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:50.479 03:12:21 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89246 00:27:50.479 03:12:21 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:50.479 03:12:21 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:50.479 03:12:21 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89246' 00:27:50.479 killing process with pid 89246 00:27:50.479 03:12:21 nvmf_dif -- common/autotest_common.sh@973 -- # kill 89246 00:27:50.479 03:12:21 nvmf_dif -- common/autotest_common.sh@978 -- # wait 89246 00:27:51.416 03:12:21 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:27:51.416 03:12:21 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:51.416 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:51.675 Waiting for block devices as requested 00:27:51.675 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:51.675 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:51.675 03:12:22 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:51.675 03:12:22 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:51.675 03:12:22 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:27:51.675 03:12:22 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:27:51.675 03:12:22 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:51.675 03:12:22 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:27:51.675 03:12:22 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:51.675 03:12:22 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:51.675 03:12:22 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:51.934 03:12:22 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:51.934 03:12:22 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:51.934 03:12:22 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:51.934 03:12:22 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:51.934 03:12:22 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:51.934 03:12:22 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:51.934 03:12:22 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:51.934 03:12:22 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:51.935 03:12:22 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:51.935 03:12:22 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:51.935 03:12:22 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:51.935 03:12:22 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:51.935 03:12:22 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:51.935 03:12:22 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.935 03:12:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:51.935 03:12:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.194 03:12:22 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:27:52.194 00:27:52.194 real 1m8.170s 00:27:52.194 user 4m4.671s 00:27:52.194 sys 0m19.258s 00:27:52.194 03:12:22 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:52.194 ************************************ 00:27:52.194 03:12:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:52.194 END TEST nvmf_dif 00:27:52.194 ************************************ 00:27:52.194 03:12:22 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:52.194 03:12:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:52.194 03:12:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:52.194 03:12:22 -- common/autotest_common.sh@10 -- # set +x 00:27:52.194 ************************************ 00:27:52.194 START TEST nvmf_abort_qd_sizes 00:27:52.194 ************************************ 00:27:52.194 03:12:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 
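The nvmftestfini sequence traced above unwinds everything the nvmf_dif suite set up before abort_qd_sizes.sh rebuilds the same environment: unload the kernel NVMe/TCP modules, kill the nvmf_tgt process, hand the NVMe devices back to the kernel driver via setup.sh reset, strip the SPDK_NVMF-tagged iptables rules, and delete the veth/bridge/namespace topology. A condensed sketch of that order, with interface and helper names taken from the trace; per-command error handling is omitted and the _remove_spdk_ns helper is approximated here by a plain ip netns delete:

  modprobe -r nvme-tcp && modprobe -r nvme-fabrics       # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
  kill "$nvmfpid" && wait "$nvmfpid"                      # killprocess: stop the nvmf_tgt (reactor_0) process
  /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset     # rebind NVMe devices to the kernel nvme driver
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep only the non-test firewall rules
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" nomaster && ip link set "$l" down  # detach peer ends from the bridge
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if && ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                        # approximation of _remove_spdk_ns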
00:27:52.194 * Looking for test storage... 00:27:52.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:52.194 03:12:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:52.194 03:12:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:27:52.194 03:12:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:52.194 03:12:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:52.194 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:52.194 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:52.194 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:52.194 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:27:52.194 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:27:52.194 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:27:52.194 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:27:52.194 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:27:52.194 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:27:52.194 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:27:52.194 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:52.194 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:27:52.194 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:27:52.194 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:52.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.195 --rc genhtml_branch_coverage=1 00:27:52.195 --rc genhtml_function_coverage=1 00:27:52.195 --rc genhtml_legend=1 00:27:52.195 --rc geninfo_all_blocks=1 00:27:52.195 --rc geninfo_unexecuted_blocks=1 00:27:52.195 00:27:52.195 ' 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:52.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.195 --rc genhtml_branch_coverage=1 00:27:52.195 --rc genhtml_function_coverage=1 00:27:52.195 --rc genhtml_legend=1 00:27:52.195 --rc geninfo_all_blocks=1 00:27:52.195 --rc geninfo_unexecuted_blocks=1 00:27:52.195 00:27:52.195 ' 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:52.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.195 --rc genhtml_branch_coverage=1 00:27:52.195 --rc genhtml_function_coverage=1 00:27:52.195 --rc genhtml_legend=1 00:27:52.195 --rc geninfo_all_blocks=1 00:27:52.195 --rc geninfo_unexecuted_blocks=1 00:27:52.195 00:27:52.195 ' 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:52.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.195 --rc genhtml_branch_coverage=1 00:27:52.195 --rc genhtml_function_coverage=1 00:27:52.195 --rc genhtml_legend=1 00:27:52.195 --rc geninfo_all_blocks=1 00:27:52.195 --rc geninfo_unexecuted_blocks=1 00:27:52.195 00:27:52.195 ' 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.195 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:52.468 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:52.468 03:12:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:52.469 Cannot find device "nvmf_init_br" 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:52.469 Cannot find device "nvmf_init_br2" 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:52.469 Cannot find device "nvmf_tgt_br" 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:52.469 Cannot find device "nvmf_tgt_br2" 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:52.469 Cannot find device "nvmf_init_br" 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:52.469 Cannot find device "nvmf_init_br2" 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:52.469 Cannot find device "nvmf_tgt_br" 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:52.469 Cannot find device "nvmf_tgt_br2" 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:52.469 Cannot find device "nvmf_br" 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:52.469 Cannot find device "nvmf_init_if" 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:52.469 Cannot find device "nvmf_init_if2" 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:52.469 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
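The "Cannot find device" / "Cannot open network namespace" messages above are expected: nvmf_veth_init first tries to delete leftovers from a previous run before creating the topology traced below. In condensed form, the layout is two veth pairs for the initiator side (kept in the root namespace) and two for the target side (moved into nvmf_tgt_ns_spdk), with all four peer ends enslaved to a single bridge; the link-up commands are omitted from this sketch:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk            # target ends live inside the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator addresses
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                            # one bridge ties the peer ends together
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" master nvmf_br
  done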
00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:52.469 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:52.469 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:52.748 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:52.748 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:52.748 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:52.748 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:52.748 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:52.748 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:52.748 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:52.748 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:52.748 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:52.748 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:52.748 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:52.748 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:52.748 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:52.748 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:52.748 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:52.748 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:52.748 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:52.748 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:27:52.748 00:27:52.748 --- 10.0.0.3 ping statistics --- 00:27:52.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.748 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:27:52.748 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:52.749 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:52.749 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:27:52.749 00:27:52.749 --- 10.0.0.4 ping statistics --- 00:27:52.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.749 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:27:52.749 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:52.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:52.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:27:52.749 00:27:52.749 --- 10.0.0.1 ping statistics --- 00:27:52.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.749 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:27:52.749 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:52.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:52.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:27:52.749 00:27:52.749 --- 10.0.0.2 ping statistics --- 00:27:52.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.749 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:27:52.749 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.749 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:27:52.749 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:27:52.749 03:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:53.315 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:53.315 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:53.573 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:53.573 03:12:24 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.573 03:12:24 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:53.573 03:12:24 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:53.573 03:12:24 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.573 03:12:24 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:53.573 03:12:24 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:53.573 03:12:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:53.573 03:12:24 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:53.573 03:12:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:53.573 03:12:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:53.573 03:12:24 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=90668 00:27:53.573 03:12:24 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 90668 00:27:53.573 03:12:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 90668 ']' 00:27:53.573 03:12:24 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:53.573 03:12:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.573 03:12:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:53.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.573 03:12:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.573 03:12:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:53.573 03:12:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:53.573 [2024-12-05 03:12:24.401591] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
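Once the bridge is up, the firewall is opened for the NVMe/TCP port, connectivity is verified in both directions with single pings, and the target application is started inside the namespace. The INPUT/FORWARD rules carry an SPDK_NVMF comment so that nvmftestfini can later remove exactly these entries. A sketch of the sequence traced above; waitforlisten is the autotest helper that polls the /var/tmp/spdk.sock RPC socket:

  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4                  # root namespace -> target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1         # namespace -> initiator addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
  # start the target inside the namespace: shm id 0, all tracepoint groups, 4 cores (0xf)
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!
  waitforlisten "$nvmfpid"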
00:27:53.573 [2024-12-05 03:12:24.401779] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:53.832 [2024-12-05 03:12:24.595919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:54.091 [2024-12-05 03:12:24.727714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:54.091 [2024-12-05 03:12:24.727806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:54.091 [2024-12-05 03:12:24.727834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:54.091 [2024-12-05 03:12:24.727849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:54.091 [2024-12-05 03:12:24.727866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:54.091 [2024-12-05 03:12:24.730044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.091 [2024-12-05 03:12:24.730188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:54.091 [2024-12-05 03:12:24.731215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:54.091 [2024-12-05 03:12:24.731253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.091 [2024-12-05 03:12:24.915387] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:27:54.658 03:12:25 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
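abort_qd_sizes.sh picks its test device with nvme_in_userspace (scripts/common.sh, traced above), which enumerates NVMe controllers by PCI class code 01/08/02 rather than by kernel driver name, so it finds the devices whether they are bound to the kernel nvme driver or to uio_pci_generic. A condensed sketch of that enumeration; the allow/deny and in-use filtering done by pci_can_use and block_in_use is reduced to a comment here:

  # every PCI function with class/subclass 01/08 (mass storage / NVM), progif 02, is an NVMe controller
  nvmes=($(lspci -mm -n -D | grep -i -- -p02 \
           | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'))
  # the real helper additionally honours PCI allow/deny lists and skips controllers
  # whose kernel block devices are mounted or otherwise in use
  printf '%s\n' "${nvmes[@]}"        # here: 0000:00:10.0 0000:00:11.0
  nvme=${nvmes[0]}                   # the first controller becomes the spdk_target device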
00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:54.658 03:12:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:54.658 ************************************ 00:27:54.658 START TEST spdk_target_abort 00:27:54.658 ************************************ 00:27:54.658 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:27:54.658 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:54.658 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:27:54.658 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.658 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:54.917 spdk_targetn1 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:54.917 [2024-12-05 03:12:25.551547] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:54.917 [2024-12-05 03:12:25.593830] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:54.917 03:12:25 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:54.917 03:12:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:58.207 Initializing NVMe Controllers 00:27:58.207 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:27:58.207 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:58.207 Initialization complete. Launching workers. 
00:27:58.207 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8433, failed: 0 00:27:58.207 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1099, failed to submit 7334 00:27:58.207 success 776, unsuccessful 323, failed 0 00:27:58.207 03:12:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:58.207 03:12:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:02.400 Initializing NVMe Controllers 00:28:02.400 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:02.400 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:02.400 Initialization complete. Launching workers. 00:28:02.400 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9024, failed: 0 00:28:02.400 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1164, failed to submit 7860 00:28:02.400 success 395, unsuccessful 769, failed 0 00:28:02.400 03:12:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:02.400 03:12:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:05.688 Initializing NVMe Controllers 00:28:05.688 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:05.688 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:05.688 Initialization complete. Launching workers. 
00:28:05.688 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27884, failed: 0 00:28:05.688 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2254, failed to submit 25630 00:28:05.688 success 375, unsuccessful 1879, failed 0 00:28:05.688 03:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:05.688 03:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.688 03:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:05.688 03:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.688 03:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:05.688 03:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.688 03:12:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:05.688 03:12:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.688 03:12:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 90668 00:28:05.688 03:12:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 90668 ']' 00:28:05.688 03:12:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 90668 00:28:05.688 03:12:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:28:05.688 03:12:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:05.688 03:12:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90668 00:28:05.688 killing process with pid 90668 00:28:05.688 03:12:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:05.688 03:12:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:05.688 03:12:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90668' 00:28:05.688 03:12:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 90668 00:28:05.688 03:12:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 90668 00:28:06.257 00:28:06.257 real 0m11.346s 00:28:06.257 user 0m45.226s 00:28:06.257 sys 0m2.253s 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:06.257 ************************************ 00:28:06.257 END TEST spdk_target_abort 00:28:06.257 ************************************ 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:06.257 03:12:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:06.257 03:12:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:06.257 03:12:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:06.257 03:12:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:06.257 ************************************ 00:28:06.257 START TEST kernel_target_abort 00:28:06.257 
************************************ 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:06.257 03:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:06.516 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:06.516 Waiting for block devices as requested 00:28:06.516 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:06.775 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:07.040 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:07.040 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:07.040 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:07.040 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:07.040 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:07.040 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:07.040 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:07.040 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:07.040 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:07.040 No valid GPT data, bailing 00:28:07.040 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:07.040 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:07.041 No valid GPT data, bailing 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:07.041 No valid GPT data, bailing 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:28:07.041 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:07.299 No valid GPT data, bailing 00:28:07.299 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:07.299 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:07.299 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:07.299 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:28:07.299 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:28:07.299 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:07.299 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:07.299 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:07.299 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:07.300 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:28:07.300 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:28:07.300 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:28:07.300 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:07.300 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:28:07.300 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:28:07.300 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:28:07.300 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:07.300 03:12:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 --hostid=df5c4e32-2325-45d3-96aa-3fdfe3165f53 -a 10.0.0.1 -t tcp -s 4420 00:28:07.300 00:28:07.300 Discovery Log Number of Records 2, Generation counter 2 00:28:07.300 =====Discovery Log Entry 0====== 00:28:07.300 trtype: tcp 00:28:07.300 adrfam: ipv4 00:28:07.300 subtype: current discovery subsystem 00:28:07.300 treq: not specified, sq flow control disable supported 00:28:07.300 portid: 1 00:28:07.300 trsvcid: 4420 00:28:07.300 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:07.300 traddr: 10.0.0.1 00:28:07.300 eflags: none 00:28:07.300 sectype: none 00:28:07.300 =====Discovery Log Entry 1====== 00:28:07.300 trtype: tcp 00:28:07.300 adrfam: ipv4 00:28:07.300 subtype: nvme subsystem 00:28:07.300 treq: not specified, sq flow control disable supported 00:28:07.300 portid: 1 00:28:07.300 trsvcid: 4420 00:28:07.300 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:07.300 traddr: 10.0.0.1 00:28:07.300 eflags: none 00:28:07.300 sectype: none 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:07.300 03:12:38 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:07.300 03:12:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:10.584 Initializing NVMe Controllers 00:28:10.584 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:10.584 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:10.584 Initialization complete. Launching workers. 00:28:10.584 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 24692, failed: 0 00:28:10.584 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24692, failed to submit 0 00:28:10.584 success 0, unsuccessful 24692, failed 0 00:28:10.584 03:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:10.584 03:12:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:13.872 Initializing NVMe Controllers 00:28:13.872 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:13.872 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:13.872 Initialization complete. Launching workers. 
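For reference while the abort runs above proceed: the kernel target they are aimed at was assembled earlier purely through nvmet configfs. A condensed recap using the values from this run; the attribute file names follow the standard nvmet configfs layout, which the trace only shows as bare echo redirections:

    nqn=nqn.2016-06.io.spdk:testnqn
    cfg=/sys/kernel/config/nvmet
    modprobe nvmet
    mkdir "$cfg/subsystems/$nqn"
    mkdir "$cfg/subsystems/$nqn/namespaces/1"
    mkdir "$cfg/ports/1"
    echo 1            > "$cfg/subsystems/$nqn/attr_allow_any_host"
    echo /dev/nvme1n1 > "$cfg/subsystems/$nqn/namespaces/1/device_path"
    echo 1            > "$cfg/subsystems/$nqn/namespaces/1/enable"
    echo 10.0.0.1     > "$cfg/ports/1/addr_traddr"
    echo tcp          > "$cfg/ports/1/addr_trtype"
    echo 4420         > "$cfg/ports/1/addr_trsvcid"
    echo ipv4         > "$cfg/ports/1/addr_adrfam"
    ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/"

clean_kernel_target later in the trace undoes this in reverse: remove the port symlink, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet.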
00:28:13.872 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57043, failed: 0 00:28:13.872 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23473, failed to submit 33570 00:28:13.872 success 0, unsuccessful 23473, failed 0 00:28:13.872 03:12:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:13.872 03:12:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:17.158 Initializing NVMe Controllers 00:28:17.158 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:17.158 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:17.158 Initialization complete. Launching workers. 00:28:17.158 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 60916, failed: 0 00:28:17.158 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15210, failed to submit 45706 00:28:17.158 success 0, unsuccessful 15210, failed 0 00:28:17.158 03:12:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:17.158 03:12:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:17.158 03:12:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:28:17.158 03:12:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:17.158 03:12:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:17.158 03:12:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:17.158 03:12:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:17.158 03:12:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:17.158 03:12:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:17.158 03:12:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:17.727 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:18.295 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:18.555 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:18.555 00:28:18.555 real 0m12.354s 00:28:18.555 user 0m6.141s 00:28:18.555 sys 0m3.897s 00:28:18.555 03:12:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:18.555 03:12:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:18.555 ************************************ 00:28:18.555 END TEST kernel_target_abort 00:28:18.555 ************************************ 00:28:18.555 03:12:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:18.555 03:12:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:18.555 
03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:18.555 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:28:18.555 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:18.555 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:28:18.555 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:18.555 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:18.555 rmmod nvme_tcp 00:28:18.555 rmmod nvme_fabrics 00:28:18.555 rmmod nvme_keyring 00:28:18.555 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:18.555 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:28:18.555 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:28:18.555 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 90668 ']' 00:28:18.555 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 90668 00:28:18.555 03:12:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 90668 ']' 00:28:18.555 03:12:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 90668 00:28:18.555 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90668) - No such process 00:28:18.555 Process with pid 90668 is not found 00:28:18.555 03:12:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 90668 is not found' 00:28:18.555 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:28:18.555 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:19.124 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:19.124 Waiting for block devices as requested 00:28:19.124 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:19.124 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:19.383 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:19.383 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:19.383 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:28:19.383 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:28:19.383 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:19.383 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:28:19.383 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:19.383 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:19.383 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:19.383 03:12:49 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:19.383 03:12:50 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:19.383 03:12:50 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:19.383 03:12:50 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:19.383 03:12:50 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:19.383 03:12:50 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:19.383 03:12:50 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:19.383 03:12:50 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:19.383 03:12:50 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:19.383 03:12:50 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:19.383 03:12:50 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:19.383 03:12:50 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:19.383 03:12:50 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:19.383 03:12:50 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.383 03:12:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:19.383 03:12:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.383 03:12:50 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:28:19.383 00:28:19.383 real 0m27.378s 00:28:19.383 user 0m52.750s 00:28:19.383 sys 0m7.603s 00:28:19.383 03:12:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:19.383 03:12:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:19.383 ************************************ 00:28:19.383 END TEST nvmf_abort_qd_sizes 00:28:19.383 ************************************ 00:28:19.643 03:12:50 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:19.643 03:12:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:19.643 03:12:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:19.643 03:12:50 -- common/autotest_common.sh@10 -- # set +x 00:28:19.643 ************************************ 00:28:19.643 START TEST keyring_file 00:28:19.643 ************************************ 00:28:19.643 03:12:50 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:19.643 * Looking for test storage... 
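The nvmf_tcp_fini / nvmf_veth_fini sequence traced just above tears the virtual test network back down. Condensed into plain shell (interface and namespace names are the fixture's own; the final netns removal is an assumption about what remove_spdk_ns does):

    # flush only the SPDK test rules, leaving the rest of the firewall intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$link" nomaster
        ip link set "$link" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed: this is what remove_spdk_ns performs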
00:28:19.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:19.643 03:12:50 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:19.643 03:12:50 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:28:19.643 03:12:50 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:19.643 03:12:50 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@345 -- # : 1 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@353 -- # local d=1 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@355 -- # echo 1 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@353 -- # local d=2 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@355 -- # echo 2 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:19.643 03:12:50 keyring_file -- scripts/common.sh@368 -- # return 0 00:28:19.643 03:12:50 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:19.643 03:12:50 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.643 --rc genhtml_branch_coverage=1 00:28:19.643 --rc genhtml_function_coverage=1 00:28:19.643 --rc genhtml_legend=1 00:28:19.643 --rc geninfo_all_blocks=1 00:28:19.643 --rc geninfo_unexecuted_blocks=1 00:28:19.643 00:28:19.643 ' 00:28:19.643 03:12:50 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.643 --rc genhtml_branch_coverage=1 00:28:19.643 --rc genhtml_function_coverage=1 00:28:19.643 --rc genhtml_legend=1 00:28:19.643 --rc geninfo_all_blocks=1 00:28:19.643 --rc 
geninfo_unexecuted_blocks=1 00:28:19.643 00:28:19.643 ' 00:28:19.643 03:12:50 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.643 --rc genhtml_branch_coverage=1 00:28:19.643 --rc genhtml_function_coverage=1 00:28:19.643 --rc genhtml_legend=1 00:28:19.643 --rc geninfo_all_blocks=1 00:28:19.643 --rc geninfo_unexecuted_blocks=1 00:28:19.643 00:28:19.643 ' 00:28:19.643 03:12:50 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.643 --rc genhtml_branch_coverage=1 00:28:19.644 --rc genhtml_function_coverage=1 00:28:19.644 --rc genhtml_legend=1 00:28:19.644 --rc geninfo_all_blocks=1 00:28:19.644 --rc geninfo_unexecuted_blocks=1 00:28:19.644 00:28:19.644 ' 00:28:19.644 03:12:50 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:19.644 03:12:50 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:19.644 03:12:50 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:28:19.644 03:12:50 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.644 03:12:50 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.644 03:12:50 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.644 03:12:50 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.644 03:12:50 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.644 03:12:50 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.644 03:12:50 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:19.644 03:12:50 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@51 -- # : 0 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:19.644 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:19.644 03:12:50 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:19.644 03:12:50 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:19.644 03:12:50 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:19.644 03:12:50 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:19.644 03:12:50 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:19.644 03:12:50 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:19.644 03:12:50 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:19.644 03:12:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:19.644 03:12:50 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:19.644 03:12:50 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:19.644 03:12:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:19.644 03:12:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:19.644 03:12:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.O6SUAkDg3S 00:28:19.644 03:12:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:19.644 03:12:50 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:19.904 03:12:50 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:19.904 03:12:50 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:19.904 03:12:50 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:19.904 03:12:50 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:19.904 03:12:50 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:19.904 03:12:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.O6SUAkDg3S 00:28:19.904 03:12:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.O6SUAkDg3S 00:28:19.904 03:12:50 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.O6SUAkDg3S 00:28:19.904 03:12:50 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:19.904 03:12:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:19.904 03:12:50 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:19.904 03:12:50 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:19.904 03:12:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:19.904 03:12:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:19.904 03:12:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sBRKkQ498G 00:28:19.904 03:12:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:19.904 03:12:50 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:19.904 03:12:50 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:19.904 03:12:50 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:19.904 03:12:50 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:28:19.904 03:12:50 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:19.904 03:12:50 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:19.904 03:12:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sBRKkQ498G 00:28:19.904 03:12:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sBRKkQ498G 00:28:19.904 03:12:50 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.sBRKkQ498G 00:28:19.904 03:12:50 keyring_file -- keyring/file.sh@30 -- # tgtpid=91677 00:28:19.904 03:12:50 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:19.904 03:12:50 keyring_file -- keyring/file.sh@32 -- # waitforlisten 91677 00:28:19.904 03:12:50 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 91677 ']' 00:28:19.904 03:12:50 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.904 03:12:50 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
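The key preparation just traced follows a fixed pattern: wrap the raw hex PSK into the NVMe TLS interchange form, drop it into a mode-0600 temp file, and later register it with the bperf instance by name (the keyring_file_add_key calls appear further down). Roughly, using the same helpers and RPCs as the test (the inline python that does the actual encoding is not reproduced here):

    path=$(mktemp)                                   # e.g. /tmp/tmp.O6SUAkDg3S in this run
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
    chmod 0600 "$path"                               # anything looser is rejected later in the test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        keyring_file_add_key key0 "$path"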
00:28:19.904 03:12:50 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.904 03:12:50 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.904 03:12:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:19.904 [2024-12-05 03:12:50.716495] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:28:19.904 [2024-12-05 03:12:50.716723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91677 ] 00:28:20.163 [2024-12-05 03:12:50.910506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.422 [2024-12-05 03:12:51.014528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.422 [2024-12-05 03:12:51.202882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:20.991 03:12:51 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:20.991 03:12:51 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:20.991 03:12:51 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:20.991 03:12:51 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.991 03:12:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:20.991 [2024-12-05 03:12:51.698593] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.991 null0 00:28:20.991 [2024-12-05 03:12:51.730598] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:20.991 [2024-12-05 03:12:51.730868] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:20.991 03:12:51 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.991 03:12:51 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:20.991 03:12:51 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:20.991 03:12:51 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:20.991 03:12:51 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:20.991 03:12:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:20.991 03:12:51 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:20.991 03:12:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:20.991 03:12:51 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:20.991 03:12:51 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.991 03:12:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:20.991 [2024-12-05 03:12:51.758602] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:20.991 request: 00:28:20.991 { 00:28:20.991 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:20.991 "secure_channel": false, 00:28:20.991 "listen_address": { 00:28:20.991 "trtype": "tcp", 00:28:20.991 "traddr": "127.0.0.1", 00:28:20.991 "trsvcid": "4420" 00:28:20.991 }, 00:28:20.992 "method": "nvmf_subsystem_add_listener", 
00:28:20.992 "req_id": 1 00:28:20.992 } 00:28:20.992 Got JSON-RPC error response 00:28:20.992 response: 00:28:20.992 { 00:28:20.992 "code": -32602, 00:28:20.992 "message": "Invalid parameters" 00:28:20.992 } 00:28:20.992 03:12:51 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:20.992 03:12:51 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:20.992 03:12:51 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:20.992 03:12:51 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:20.992 03:12:51 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:20.992 03:12:51 keyring_file -- keyring/file.sh@47 -- # bperfpid=91694 00:28:20.992 03:12:51 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:20.992 03:12:51 keyring_file -- keyring/file.sh@49 -- # waitforlisten 91694 /var/tmp/bperf.sock 00:28:20.992 03:12:51 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 91694 ']' 00:28:20.992 03:12:51 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:20.992 03:12:51 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.992 03:12:51 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:20.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:20.992 03:12:51 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.992 03:12:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:21.251 [2024-12-05 03:12:51.849629] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
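The nvmf_subsystem_add_listener call above was deliberately run through the NOT wrapper: the listener already exists, so the RPC must fail for the test to pass. Stripped of the valid_exec_arg/xtrace plumbing, the pattern reduces to something like:

    # illustrative reduction of autotest_common.sh's NOT helper, not the literal code
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))        # succeed only when the wrapped command failed
    }
    NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
        nqn.2016-06.io.spdk:cnode0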
00:28:21.251 [2024-12-05 03:12:51.850054] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91694 ] 00:28:21.251 [2024-12-05 03:12:52.022304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.510 [2024-12-05 03:12:52.146468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.510 [2024-12-05 03:12:52.314802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:22.078 03:12:52 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.078 03:12:52 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:22.078 03:12:52 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.O6SUAkDg3S 00:28:22.078 03:12:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.O6SUAkDg3S 00:28:22.337 03:12:53 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sBRKkQ498G 00:28:22.337 03:12:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sBRKkQ498G 00:28:22.596 03:12:53 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:28:22.596 03:12:53 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:22.596 03:12:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:22.596 03:12:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:22.596 03:12:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:22.856 03:12:53 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.O6SUAkDg3S == \/\t\m\p\/\t\m\p\.\O\6\S\U\A\k\D\g\3\S ]] 00:28:22.856 03:12:53 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:28:22.856 03:12:53 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:28:22.856 03:12:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:22.856 03:12:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:22.856 03:12:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:23.114 03:12:53 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.sBRKkQ498G == \/\t\m\p\/\t\m\p\.\s\B\R\K\k\Q\4\9\8\G ]] 00:28:23.114 03:12:53 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:28:23.114 03:12:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:23.114 03:12:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:23.114 03:12:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:23.114 03:12:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:23.114 03:12:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:23.373 03:12:54 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:23.373 03:12:54 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:28:23.373 03:12:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:23.373 03:12:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:23.373 03:12:54 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:23.373 03:12:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:23.373 03:12:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:23.632 03:12:54 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:28:23.632 03:12:54 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:23.632 03:12:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:23.889 [2024-12-05 03:12:54.529905] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:23.889 nvme0n1 00:28:23.889 03:12:54 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:28:23.889 03:12:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:23.889 03:12:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:23.889 03:12:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:23.889 03:12:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:23.889 03:12:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:24.147 03:12:54 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:28:24.148 03:12:54 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:28:24.148 03:12:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:24.148 03:12:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:24.148 03:12:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:24.148 03:12:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:24.148 03:12:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:24.415 03:12:55 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:28:24.415 03:12:55 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:24.676 Running I/O for 1 seconds... 
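While the 1-second bdevperf run just launched completes, the RPC sequence the harness drove over bperf.sock can be summarized as follows (same RPCs and addresses as in the trace; $rpc stands for scripts/rpc.py pointed at the bperf socket):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    $rpc keyring_get_keys | jq '.[] | select(.name == "key0").refcnt'   # 2: file + active session
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
    $rpc bdev_nvme_detach_controller nvme0    # the detach is traced right after the results below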
00:28:25.664 9685.00 IOPS, 37.83 MiB/s 00:28:25.664 Latency(us) 00:28:25.664 [2024-12-05T03:12:56.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.664 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:25.664 nvme0n1 : 1.01 9727.10 38.00 0.00 0.00 13115.31 8757.99 25499.46 00:28:25.664 [2024-12-05T03:12:56.508Z] =================================================================================================================== 00:28:25.664 [2024-12-05T03:12:56.508Z] Total : 9727.10 38.00 0.00 0.00 13115.31 8757.99 25499.46 00:28:25.664 { 00:28:25.664 "results": [ 00:28:25.664 { 00:28:25.664 "job": "nvme0n1", 00:28:25.664 "core_mask": "0x2", 00:28:25.664 "workload": "randrw", 00:28:25.664 "percentage": 50, 00:28:25.664 "status": "finished", 00:28:25.664 "queue_depth": 128, 00:28:25.664 "io_size": 4096, 00:28:25.664 "runtime": 1.008831, 00:28:25.664 "iops": 9727.09998007595, 00:28:25.664 "mibps": 37.996484297171676, 00:28:25.664 "io_failed": 0, 00:28:25.664 "io_timeout": 0, 00:28:25.664 "avg_latency_us": 13115.311326904015, 00:28:25.664 "min_latency_us": 8757.992727272727, 00:28:25.664 "max_latency_us": 25499.46181818182 00:28:25.664 } 00:28:25.664 ], 00:28:25.664 "core_count": 1 00:28:25.664 } 00:28:25.664 03:12:56 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:25.664 03:12:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:25.923 03:12:56 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:28:25.923 03:12:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:25.923 03:12:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:25.923 03:12:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:25.923 03:12:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:25.923 03:12:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:26.183 03:12:56 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:26.183 03:12:56 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:28:26.183 03:12:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:26.183 03:12:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:26.183 03:12:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:26.183 03:12:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:26.183 03:12:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:26.441 03:12:57 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:28:26.442 03:12:57 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:26.442 03:12:57 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:26.442 03:12:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:26.442 03:12:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:26.442 03:12:57 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:26.442 03:12:57 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:26.442 03:12:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:26.442 03:12:57 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:26.442 03:12:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:26.700 [2024-12-05 03:12:57.369621] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:26.700 [2024-12-05 03:12:57.369932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:28:26.700 [2024-12-05 03:12:57.370903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:28:26.700 [2024-12-05 03:12:57.371895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:28:26.700 [2024-12-05 03:12:57.372130] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:26.700 [2024-12-05 03:12:57.372260] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:28:26.700 [2024-12-05 03:12:57.372394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:28:26.700 request: 00:28:26.700 { 00:28:26.700 "name": "nvme0", 00:28:26.700 "trtype": "tcp", 00:28:26.700 "traddr": "127.0.0.1", 00:28:26.700 "adrfam": "ipv4", 00:28:26.700 "trsvcid": "4420", 00:28:26.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:26.700 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:26.700 "prchk_reftag": false, 00:28:26.700 "prchk_guard": false, 00:28:26.700 "hdgst": false, 00:28:26.700 "ddgst": false, 00:28:26.700 "psk": "key1", 00:28:26.700 "allow_unrecognized_csi": false, 00:28:26.700 "method": "bdev_nvme_attach_controller", 00:28:26.700 "req_id": 1 00:28:26.700 } 00:28:26.700 Got JSON-RPC error response 00:28:26.700 response: 00:28:26.700 { 00:28:26.701 "code": -5, 00:28:26.701 "message": "Input/output error" 00:28:26.701 } 00:28:26.701 03:12:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:26.701 03:12:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:26.701 03:12:57 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:26.701 03:12:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:26.701 03:12:57 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:28:26.701 03:12:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:26.701 03:12:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:26.701 03:12:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:26.701 03:12:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:26.701 03:12:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:26.959 03:12:57 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:26.959 03:12:57 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:28:26.959 03:12:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:26.959 03:12:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:26.959 03:12:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:26.959 03:12:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:26.959 03:12:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:27.218 03:12:57 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:28:27.218 03:12:57 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:28:27.218 03:12:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:27.475 03:12:58 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:28:27.475 03:12:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:27.733 03:12:58 keyring_file -- keyring/file.sh@78 -- # jq length 00:28:27.733 03:12:58 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:28:27.733 03:12:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:27.991 03:12:58 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:28:27.991 03:12:58 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.O6SUAkDg3S 00:28:27.991 03:12:58 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.O6SUAkDg3S 00:28:27.991 03:12:58 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:28:27.991 03:12:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.O6SUAkDg3S 00:28:27.991 03:12:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:27.991 03:12:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:27.991 03:12:58 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:27.991 03:12:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:27.991 03:12:58 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.O6SUAkDg3S 00:28:27.991 03:12:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.O6SUAkDg3S 00:28:27.991 [2024-12-05 03:12:58.833705] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.O6SUAkDg3S': 0100660 00:28:27.991 [2024-12-05 03:12:58.833770] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:28.251 request: 00:28:28.251 { 00:28:28.251 "name": "key0", 00:28:28.251 "path": "/tmp/tmp.O6SUAkDg3S", 00:28:28.251 "method": "keyring_file_add_key", 00:28:28.251 "req_id": 1 00:28:28.251 } 00:28:28.251 Got JSON-RPC error response 00:28:28.251 response: 00:28:28.251 { 00:28:28.251 "code": -1, 00:28:28.251 "message": "Operation not permitted" 00:28:28.251 } 00:28:28.251 03:12:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:28.251 03:12:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:28.251 03:12:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:28.251 03:12:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:28.251 03:12:58 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.O6SUAkDg3S 00:28:28.251 03:12:58 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.O6SUAkDg3S 00:28:28.251 03:12:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.O6SUAkDg3S 00:28:28.251 03:12:59 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.O6SUAkDg3S 00:28:28.251 03:12:59 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:28:28.251 03:12:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:28.251 03:12:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:28.251 03:12:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:28.251 03:12:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:28.251 03:12:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:28.819 03:12:59 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:28:28.819 03:12:59 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:28.819 03:12:59 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:28.819 03:12:59 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:28.820 03:12:59 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:28.820 03:12:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:28.820 03:12:59 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:28.820 03:12:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:28.820 03:12:59 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:28.820 03:12:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:28.820 [2024-12-05 03:12:59.617932] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.O6SUAkDg3S': No such file or directory 00:28:28.820 [2024-12-05 03:12:59.618000] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:28.820 [2024-12-05 03:12:59.618026] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:28.820 [2024-12-05 03:12:59.618038] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:28:28.820 [2024-12-05 03:12:59.618050] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:28.820 [2024-12-05 03:12:59.618067] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:28.820 request: 00:28:28.820 { 00:28:28.820 "name": "nvme0", 00:28:28.820 "trtype": "tcp", 00:28:28.820 "traddr": "127.0.0.1", 00:28:28.820 "adrfam": "ipv4", 00:28:28.820 "trsvcid": "4420", 00:28:28.820 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:28.820 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:28.820 "prchk_reftag": false, 00:28:28.820 "prchk_guard": false, 00:28:28.820 "hdgst": false, 00:28:28.820 "ddgst": false, 00:28:28.820 "psk": "key0", 00:28:28.820 "allow_unrecognized_csi": false, 00:28:28.820 "method": "bdev_nvme_attach_controller", 00:28:28.820 "req_id": 1 00:28:28.820 } 00:28:28.820 Got JSON-RPC error response 00:28:28.820 response: 00:28:28.820 { 00:28:28.820 "code": -19, 00:28:28.820 "message": "No such device" 00:28:28.820 } 00:28:28.820 03:12:59 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:28.820 03:12:59 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:28.820 03:12:59 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:28.820 03:12:59 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:28.820 03:12:59 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:28:28.820 03:12:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:29.078 03:12:59 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:29.078 03:12:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:29.078 03:12:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:29.078 03:12:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:29.078 
03:12:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:29.078 03:12:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:29.078 03:12:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.S7t2tcFa5T 00:28:29.078 03:12:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:29.078 03:12:59 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:29.078 03:12:59 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:29.078 03:12:59 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:29.078 03:12:59 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:29.078 03:12:59 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:29.078 03:12:59 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:29.079 03:12:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.S7t2tcFa5T 00:28:29.079 03:12:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.S7t2tcFa5T 00:28:29.079 03:12:59 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.S7t2tcFa5T 00:28:29.079 03:12:59 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S7t2tcFa5T 00:28:29.079 03:12:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S7t2tcFa5T 00:28:29.645 03:13:00 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:29.645 03:13:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:29.904 nvme0n1 00:28:29.904 03:13:00 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:28:29.904 03:13:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:29.904 03:13:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:29.904 03:13:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:29.904 03:13:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:29.904 03:13:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:30.162 03:13:00 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:28:30.162 03:13:00 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:28:30.162 03:13:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:30.420 03:13:01 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:28:30.421 03:13:01 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:28:30.421 03:13:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:30.421 03:13:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:30.421 03:13:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:30.679 03:13:01 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:28:30.679 03:13:01 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:28:30.679 03:13:01 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:28:30.679 03:13:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:30.679 03:13:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:30.679 03:13:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:30.679 03:13:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:30.938 03:13:01 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:28:30.938 03:13:01 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:30.938 03:13:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:31.196 03:13:01 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:28:31.196 03:13:01 keyring_file -- keyring/file.sh@105 -- # jq length 00:28:31.196 03:13:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:31.454 03:13:02 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:28:31.454 03:13:02 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S7t2tcFa5T 00:28:31.454 03:13:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S7t2tcFa5T 00:28:31.454 03:13:02 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sBRKkQ498G 00:28:31.454 03:13:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sBRKkQ498G 00:28:31.713 03:13:02 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:31.713 03:13:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:32.280 nvme0n1 00:28:32.280 03:13:02 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:28:32.280 03:13:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:32.539 03:13:03 keyring_file -- keyring/file.sh@113 -- # config='{ 00:28:32.539 "subsystems": [ 00:28:32.539 { 00:28:32.539 "subsystem": "keyring", 00:28:32.539 "config": [ 00:28:32.539 { 00:28:32.539 "method": "keyring_file_add_key", 00:28:32.539 "params": { 00:28:32.539 "name": "key0", 00:28:32.539 "path": "/tmp/tmp.S7t2tcFa5T" 00:28:32.539 } 00:28:32.539 }, 00:28:32.539 { 00:28:32.539 "method": "keyring_file_add_key", 00:28:32.539 "params": { 00:28:32.539 "name": "key1", 00:28:32.539 "path": "/tmp/tmp.sBRKkQ498G" 00:28:32.539 } 00:28:32.539 } 00:28:32.539 ] 00:28:32.539 }, 00:28:32.539 { 00:28:32.539 "subsystem": "iobuf", 00:28:32.539 "config": [ 00:28:32.539 { 00:28:32.539 "method": "iobuf_set_options", 00:28:32.539 "params": { 00:28:32.539 "small_pool_count": 8192, 00:28:32.539 "large_pool_count": 1024, 00:28:32.539 "small_bufsize": 8192, 00:28:32.539 "large_bufsize": 135168, 00:28:32.539 "enable_numa": false 00:28:32.539 } 00:28:32.539 } 00:28:32.539 ] 00:28:32.539 }, 00:28:32.539 { 00:28:32.539 "subsystem": 
"sock", 00:28:32.539 "config": [ 00:28:32.539 { 00:28:32.539 "method": "sock_set_default_impl", 00:28:32.539 "params": { 00:28:32.539 "impl_name": "uring" 00:28:32.539 } 00:28:32.539 }, 00:28:32.539 { 00:28:32.539 "method": "sock_impl_set_options", 00:28:32.539 "params": { 00:28:32.539 "impl_name": "ssl", 00:28:32.539 "recv_buf_size": 4096, 00:28:32.539 "send_buf_size": 4096, 00:28:32.539 "enable_recv_pipe": true, 00:28:32.539 "enable_quickack": false, 00:28:32.539 "enable_placement_id": 0, 00:28:32.539 "enable_zerocopy_send_server": true, 00:28:32.539 "enable_zerocopy_send_client": false, 00:28:32.539 "zerocopy_threshold": 0, 00:28:32.539 "tls_version": 0, 00:28:32.539 "enable_ktls": false 00:28:32.539 } 00:28:32.539 }, 00:28:32.539 { 00:28:32.539 "method": "sock_impl_set_options", 00:28:32.539 "params": { 00:28:32.539 "impl_name": "posix", 00:28:32.539 "recv_buf_size": 2097152, 00:28:32.539 "send_buf_size": 2097152, 00:28:32.540 "enable_recv_pipe": true, 00:28:32.540 "enable_quickack": false, 00:28:32.540 "enable_placement_id": 0, 00:28:32.540 "enable_zerocopy_send_server": true, 00:28:32.540 "enable_zerocopy_send_client": false, 00:28:32.540 "zerocopy_threshold": 0, 00:28:32.540 "tls_version": 0, 00:28:32.540 "enable_ktls": false 00:28:32.540 } 00:28:32.540 }, 00:28:32.540 { 00:28:32.540 "method": "sock_impl_set_options", 00:28:32.540 "params": { 00:28:32.540 "impl_name": "uring", 00:28:32.540 "recv_buf_size": 2097152, 00:28:32.540 "send_buf_size": 2097152, 00:28:32.540 "enable_recv_pipe": true, 00:28:32.540 "enable_quickack": false, 00:28:32.540 "enable_placement_id": 0, 00:28:32.540 "enable_zerocopy_send_server": false, 00:28:32.540 "enable_zerocopy_send_client": false, 00:28:32.540 "zerocopy_threshold": 0, 00:28:32.540 "tls_version": 0, 00:28:32.540 "enable_ktls": false 00:28:32.540 } 00:28:32.540 } 00:28:32.540 ] 00:28:32.540 }, 00:28:32.540 { 00:28:32.540 "subsystem": "vmd", 00:28:32.540 "config": [] 00:28:32.540 }, 00:28:32.540 { 00:28:32.540 "subsystem": "accel", 00:28:32.540 "config": [ 00:28:32.540 { 00:28:32.540 "method": "accel_set_options", 00:28:32.540 "params": { 00:28:32.540 "small_cache_size": 128, 00:28:32.540 "large_cache_size": 16, 00:28:32.540 "task_count": 2048, 00:28:32.540 "sequence_count": 2048, 00:28:32.540 "buf_count": 2048 00:28:32.540 } 00:28:32.540 } 00:28:32.540 ] 00:28:32.540 }, 00:28:32.540 { 00:28:32.540 "subsystem": "bdev", 00:28:32.540 "config": [ 00:28:32.540 { 00:28:32.540 "method": "bdev_set_options", 00:28:32.540 "params": { 00:28:32.540 "bdev_io_pool_size": 65535, 00:28:32.540 "bdev_io_cache_size": 256, 00:28:32.540 "bdev_auto_examine": true, 00:28:32.540 "iobuf_small_cache_size": 128, 00:28:32.540 "iobuf_large_cache_size": 16 00:28:32.540 } 00:28:32.540 }, 00:28:32.540 { 00:28:32.540 "method": "bdev_raid_set_options", 00:28:32.540 "params": { 00:28:32.540 "process_window_size_kb": 1024, 00:28:32.540 "process_max_bandwidth_mb_sec": 0 00:28:32.540 } 00:28:32.540 }, 00:28:32.540 { 00:28:32.540 "method": "bdev_iscsi_set_options", 00:28:32.540 "params": { 00:28:32.540 "timeout_sec": 30 00:28:32.540 } 00:28:32.540 }, 00:28:32.540 { 00:28:32.540 "method": "bdev_nvme_set_options", 00:28:32.540 "params": { 00:28:32.540 "action_on_timeout": "none", 00:28:32.540 "timeout_us": 0, 00:28:32.540 "timeout_admin_us": 0, 00:28:32.540 "keep_alive_timeout_ms": 10000, 00:28:32.540 "arbitration_burst": 0, 00:28:32.540 "low_priority_weight": 0, 00:28:32.540 "medium_priority_weight": 0, 00:28:32.540 "high_priority_weight": 0, 00:28:32.540 "nvme_adminq_poll_period_us": 
10000, 00:28:32.540 "nvme_ioq_poll_period_us": 0, 00:28:32.540 "io_queue_requests": 512, 00:28:32.540 "delay_cmd_submit": true, 00:28:32.540 "transport_retry_count": 4, 00:28:32.540 "bdev_retry_count": 3, 00:28:32.540 "transport_ack_timeout": 0, 00:28:32.540 "ctrlr_loss_timeout_sec": 0, 00:28:32.540 "reconnect_delay_sec": 0, 00:28:32.540 "fast_io_fail_timeout_sec": 0, 00:28:32.540 "disable_auto_failback": false, 00:28:32.540 "generate_uuids": false, 00:28:32.540 "transport_tos": 0, 00:28:32.540 "nvme_error_stat": false, 00:28:32.540 "rdma_srq_size": 0, 00:28:32.540 "io_path_stat": false, 00:28:32.540 "allow_accel_sequence": false, 00:28:32.540 "rdma_max_cq_size": 0, 00:28:32.540 "rdma_cm_event_timeout_ms": 0, 00:28:32.540 "dhchap_digests": [ 00:28:32.540 "sha256", 00:28:32.540 "sha384", 00:28:32.540 "sha512" 00:28:32.540 ], 00:28:32.540 "dhchap_dhgroups": [ 00:28:32.540 "null", 00:28:32.540 "ffdhe2048", 00:28:32.540 "ffdhe3072", 00:28:32.540 "ffdhe4096", 00:28:32.540 "ffdhe6144", 00:28:32.540 "ffdhe8192" 00:28:32.540 ] 00:28:32.540 } 00:28:32.540 }, 00:28:32.540 { 00:28:32.540 "method": "bdev_nvme_attach_controller", 00:28:32.540 "params": { 00:28:32.540 "name": "nvme0", 00:28:32.540 "trtype": "TCP", 00:28:32.540 "adrfam": "IPv4", 00:28:32.540 "traddr": "127.0.0.1", 00:28:32.540 "trsvcid": "4420", 00:28:32.540 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:32.540 "prchk_reftag": false, 00:28:32.540 "prchk_guard": false, 00:28:32.540 "ctrlr_loss_timeout_sec": 0, 00:28:32.540 "reconnect_delay_sec": 0, 00:28:32.540 "fast_io_fail_timeout_sec": 0, 00:28:32.540 "psk": "key0", 00:28:32.540 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:32.540 "hdgst": false, 00:28:32.540 "ddgst": false, 00:28:32.540 "multipath": "multipath" 00:28:32.540 } 00:28:32.540 }, 00:28:32.540 { 00:28:32.540 "method": "bdev_nvme_set_hotplug", 00:28:32.540 "params": { 00:28:32.540 "period_us": 100000, 00:28:32.540 "enable": false 00:28:32.540 } 00:28:32.540 }, 00:28:32.540 { 00:28:32.540 "method": "bdev_wait_for_examine" 00:28:32.540 } 00:28:32.540 ] 00:28:32.540 }, 00:28:32.540 { 00:28:32.540 "subsystem": "nbd", 00:28:32.540 "config": [] 00:28:32.540 } 00:28:32.540 ] 00:28:32.540 }' 00:28:32.540 03:13:03 keyring_file -- keyring/file.sh@115 -- # killprocess 91694 00:28:32.540 03:13:03 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 91694 ']' 00:28:32.540 03:13:03 keyring_file -- common/autotest_common.sh@958 -- # kill -0 91694 00:28:32.540 03:13:03 keyring_file -- common/autotest_common.sh@959 -- # uname 00:28:32.540 03:13:03 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:32.540 03:13:03 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91694 00:28:32.540 killing process with pid 91694 00:28:32.540 Received shutdown signal, test time was about 1.000000 seconds 00:28:32.540 00:28:32.540 Latency(us) 00:28:32.540 [2024-12-05T03:13:03.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.540 [2024-12-05T03:13:03.384Z] =================================================================================================================== 00:28:32.540 [2024-12-05T03:13:03.384Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:32.540 03:13:03 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:32.540 03:13:03 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:32.540 03:13:03 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91694' 00:28:32.540 
03:13:03 keyring_file -- common/autotest_common.sh@973 -- # kill 91694 00:28:32.540 03:13:03 keyring_file -- common/autotest_common.sh@978 -- # wait 91694 00:28:33.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:33.475 03:13:04 keyring_file -- keyring/file.sh@118 -- # bperfpid=91951 00:28:33.475 03:13:04 keyring_file -- keyring/file.sh@120 -- # waitforlisten 91951 /var/tmp/bperf.sock 00:28:33.475 03:13:04 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:33.475 03:13:04 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 91951 ']' 00:28:33.475 03:13:04 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:33.475 03:13:04 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.475 03:13:04 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:33.475 03:13:04 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:28:33.475 "subsystems": [ 00:28:33.475 { 00:28:33.475 "subsystem": "keyring", 00:28:33.475 "config": [ 00:28:33.475 { 00:28:33.475 "method": "keyring_file_add_key", 00:28:33.475 "params": { 00:28:33.475 "name": "key0", 00:28:33.475 "path": "/tmp/tmp.S7t2tcFa5T" 00:28:33.475 } 00:28:33.475 }, 00:28:33.475 { 00:28:33.475 "method": "keyring_file_add_key", 00:28:33.475 "params": { 00:28:33.475 "name": "key1", 00:28:33.475 "path": "/tmp/tmp.sBRKkQ498G" 00:28:33.475 } 00:28:33.475 } 00:28:33.475 ] 00:28:33.475 }, 00:28:33.475 { 00:28:33.475 "subsystem": "iobuf", 00:28:33.475 "config": [ 00:28:33.475 { 00:28:33.475 "method": "iobuf_set_options", 00:28:33.475 "params": { 00:28:33.475 "small_pool_count": 8192, 00:28:33.475 "large_pool_count": 1024, 00:28:33.475 "small_bufsize": 8192, 00:28:33.475 "large_bufsize": 135168, 00:28:33.475 "enable_numa": false 00:28:33.475 } 00:28:33.475 } 00:28:33.475 ] 00:28:33.475 }, 00:28:33.475 { 00:28:33.475 "subsystem": "sock", 00:28:33.475 "config": [ 00:28:33.475 { 00:28:33.475 "method": "sock_set_default_impl", 00:28:33.475 "params": { 00:28:33.475 "impl_name": "uring" 00:28:33.475 } 00:28:33.475 }, 00:28:33.475 { 00:28:33.475 "method": "sock_impl_set_options", 00:28:33.475 "params": { 00:28:33.475 "impl_name": "ssl", 00:28:33.475 "recv_buf_size": 4096, 00:28:33.475 "send_buf_size": 4096, 00:28:33.475 "enable_recv_pipe": true, 00:28:33.475 "enable_quickack": false, 00:28:33.476 "enable_placement_id": 0, 00:28:33.476 "enable_zerocopy_send_server": true, 00:28:33.476 "enable_zerocopy_send_client": false, 00:28:33.476 "zerocopy_threshold": 0, 00:28:33.476 "tls_version": 0, 00:28:33.476 "enable_ktls": false 00:28:33.476 } 00:28:33.476 }, 00:28:33.476 { 00:28:33.476 "method": "sock_impl_set_options", 00:28:33.476 "params": { 00:28:33.476 "impl_name": "posix", 00:28:33.476 "recv_buf_size": 2097152, 00:28:33.476 "send_buf_size": 2097152, 00:28:33.476 "enable_recv_pipe": true, 00:28:33.476 "enable_quickack": false, 00:28:33.476 "enable_placement_id": 0, 00:28:33.476 "enable_zerocopy_send_server": true, 00:28:33.476 "enable_zerocopy_send_client": false, 00:28:33.476 "zerocopy_threshold": 0, 00:28:33.476 "tls_version": 0, 00:28:33.476 "enable_ktls": false 00:28:33.476 } 00:28:33.476 }, 00:28:33.476 { 00:28:33.476 "method": "sock_impl_set_options", 00:28:33.476 "params": { 00:28:33.476 "impl_name": "uring", 00:28:33.476 
"recv_buf_size": 2097152, 00:28:33.476 "send_buf_size": 2097152, 00:28:33.476 "enable_recv_pipe": true, 00:28:33.476 "enable_quickack": false, 00:28:33.476 "enable_placement_id": 0, 00:28:33.476 "enable_zerocopy_send_server": false, 00:28:33.476 "enable_zerocopy_send_client": false, 00:28:33.476 "zerocopy_threshold": 0, 00:28:33.476 "tls_version": 0, 00:28:33.476 "enable_ktls": false 00:28:33.476 } 00:28:33.476 } 00:28:33.476 ] 00:28:33.476 }, 00:28:33.476 { 00:28:33.476 "subsystem": "vmd", 00:28:33.476 "config": [] 00:28:33.476 }, 00:28:33.476 { 00:28:33.476 "subsystem": "accel", 00:28:33.476 "config": [ 00:28:33.476 { 00:28:33.476 "method": "accel_set_options", 00:28:33.476 "params": { 00:28:33.476 "small_cache_size": 128, 00:28:33.476 "large_cache_size": 16, 00:28:33.476 "task_count": 2048, 00:28:33.476 "sequence_count": 2048, 00:28:33.476 "buf_count": 2048 00:28:33.476 } 00:28:33.476 } 00:28:33.476 ] 00:28:33.476 }, 00:28:33.476 { 00:28:33.476 "subsystem": "bdev", 00:28:33.476 "config": [ 00:28:33.476 { 00:28:33.476 "method": "bdev_set_options", 00:28:33.476 "params": { 00:28:33.476 "bdev_io_pool_size": 65535, 00:28:33.476 "bdev_io_cache_size": 256, 00:28:33.476 "bdev_auto_examine": true, 00:28:33.476 "iobuf_small_cache_size": 128, 00:28:33.476 "iobuf_large_cache_size": 16 00:28:33.476 } 00:28:33.476 }, 00:28:33.476 { 00:28:33.476 "method": "bdev_raid_set_options", 00:28:33.476 "params": { 00:28:33.476 "process_window_size_kb": 1024, 00:28:33.476 "process_max_bandwidth_mb_sec": 0 00:28:33.476 } 00:28:33.476 }, 00:28:33.476 { 00:28:33.476 "method": "bdev_iscsi_set_options", 00:28:33.476 "params": { 00:28:33.476 "timeout_sec": 30 00:28:33.476 } 00:28:33.476 }, 00:28:33.476 { 00:28:33.476 "method": "bdev_nvme_set_options", 00:28:33.476 "params": { 00:28:33.476 "action_on_timeout": "none", 00:28:33.476 "timeout_us": 0, 00:28:33.476 "timeout_admin_us": 0, 00:28:33.476 "keep_alive_timeout_ms": 10000, 00:28:33.476 "arbitration_burst": 0, 00:28:33.476 "low_priority_weight": 0, 00:28:33.476 "medium_priority_weight": 0, 00:28:33.476 "high_priority_weight": 0, 00:28:33.476 "nvme_adminq_poll_period_us": 10000, 00:28:33.476 "nvme_ioq_poll_period_us": 0, 00:28:33.476 "io_queue_requests": 512, 00:28:33.476 "delay_cmd_submit": true, 00:28:33.476 "transport_retry_count": 4, 00:28:33.476 "bdev_retry_count": 3, 00:28:33.476 "transport_ack_timeout": 0, 00:28:33.476 "ctrlr_loss_timeout_sec": 0, 00:28:33.476 "reconnect_delay_sec": 0, 00:28:33.476 "fast_io_fail_timeout_sec": 0, 00:28:33.476 "disable_auto_failback": false, 00:28:33.476 "generate_uuids": false, 00:28:33.476 "transport_tos": 0, 00:28:33.476 "nvme_error_stat": false, 00:28:33.476 "rdma_srq_size": 0, 00:28:33.476 "io_path_stat": false, 00:28:33.476 "allow_accel_sequence": false, 00:28:33.476 "rdma_max_cq_size": 0, 00:28:33.476 "rdma_cm_event_timeout_ms": 0, 00:28:33.476 "dhchap_digests": [ 00:28:33.476 "sha256", 00:28:33.476 "sha384", 00:28:33.476 "sha512" 00:28:33.476 ], 00:28:33.476 "dhchap_dhgroups": [ 00:28:33.476 "null", 00:28:33.476 "ffdhe2048", 00:28:33.476 "ffdhe3072", 00:28:33.476 "ffdhe4096", 00:28:33.476 "ffdhe6144", 00:28:33.476 "ffdhe8192" 00:28:33.476 ] 00:28:33.476 } 00:28:33.476 }, 00:28:33.476 { 00:28:33.476 "method": "bdev_nvme_attach_controller", 00:28:33.476 "params": { 00:28:33.476 "name": "nvme0", 00:28:33.476 "trtype": "TCP", 00:28:33.476 "adrfam": "IPv4", 00:28:33.476 "traddr": "127.0.0.1", 00:28:33.476 "trsvcid": "4420", 00:28:33.476 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:33.476 "prchk_reftag": false, 00:28:33.476 
"prchk_guard": false, 00:28:33.476 "ctrlr_loss_timeout_sec": 0, 00:28:33.476 "reconnect_delay_sec": 0, 00:28:33.476 "fast_io_fail_timeout_sec": 0, 00:28:33.476 "psk": "key0", 00:28:33.476 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:33.476 "hdgst": false, 00:28:33.476 "ddgst": false, 00:28:33.476 "multipath": "multipath" 00:28:33.476 } 00:28:33.476 }, 00:28:33.476 { 00:28:33.476 "method": "bdev_nvme_set_hotplug", 00:28:33.476 "params": { 00:28:33.476 "period_us": 100000, 00:28:33.476 "enable": false 00:28:33.476 } 00:28:33.476 }, 00:28:33.476 { 00:28:33.476 "method": "bdev_wait_for_examine" 00:28:33.476 } 00:28:33.476 ] 00:28:33.476 }, 00:28:33.476 { 00:28:33.476 "subsystem": "nbd", 00:28:33.476 "config": [] 00:28:33.476 } 00:28:33.476 ] 00:28:33.476 }' 00:28:33.476 03:13:04 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.476 03:13:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:33.476 [2024-12-05 03:13:04.087408] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:28:33.476 [2024-12-05 03:13:04.087735] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91951 ] 00:28:33.476 [2024-12-05 03:13:04.250269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.735 [2024-12-05 03:13:04.337718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.735 [2024-12-05 03:13:04.568798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:33.994 [2024-12-05 03:13:04.677135] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:34.281 03:13:05 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:34.281 03:13:05 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:34.281 03:13:05 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:28:34.281 03:13:05 keyring_file -- keyring/file.sh@121 -- # jq length 00:28:34.281 03:13:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:34.544 03:13:05 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:34.544 03:13:05 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:28:34.544 03:13:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:34.544 03:13:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:34.544 03:13:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:34.544 03:13:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:34.544 03:13:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:34.803 03:13:05 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:28:34.803 03:13:05 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:28:34.803 03:13:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:34.803 03:13:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:34.803 03:13:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:34.803 03:13:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:34.803 03:13:05 keyring_file 
-- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:35.062 03:13:05 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:28:35.062 03:13:05 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:28:35.062 03:13:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:35.062 03:13:05 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:28:35.321 03:13:06 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:28:35.321 03:13:06 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:35.321 03:13:06 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.S7t2tcFa5T /tmp/tmp.sBRKkQ498G 00:28:35.321 03:13:06 keyring_file -- keyring/file.sh@20 -- # killprocess 91951 00:28:35.321 03:13:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 91951 ']' 00:28:35.321 03:13:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 91951 00:28:35.321 03:13:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:28:35.321 03:13:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:35.321 03:13:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91951 00:28:35.321 killing process with pid 91951 00:28:35.321 Received shutdown signal, test time was about 1.000000 seconds 00:28:35.321 00:28:35.321 Latency(us) 00:28:35.321 [2024-12-05T03:13:06.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.321 [2024-12-05T03:13:06.165Z] =================================================================================================================== 00:28:35.321 [2024-12-05T03:13:06.165Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:35.321 03:13:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:35.321 03:13:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:35.321 03:13:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91951' 00:28:35.321 03:13:06 keyring_file -- common/autotest_common.sh@973 -- # kill 91951 00:28:35.321 03:13:06 keyring_file -- common/autotest_common.sh@978 -- # wait 91951 00:28:36.257 03:13:06 keyring_file -- keyring/file.sh@21 -- # killprocess 91677 00:28:36.257 03:13:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 91677 ']' 00:28:36.257 03:13:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 91677 00:28:36.257 03:13:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:28:36.258 03:13:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:36.258 03:13:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91677 00:28:36.258 killing process with pid 91677 00:28:36.258 03:13:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:36.258 03:13:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:36.258 03:13:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91677' 00:28:36.258 03:13:06 keyring_file -- common/autotest_common.sh@973 -- # kill 91677 00:28:36.258 03:13:06 keyring_file -- common/autotest_common.sh@978 -- # wait 91677 00:28:38.163 00:28:38.163 real 0m18.317s 00:28:38.163 user 0m43.180s 00:28:38.163 sys 0m2.859s 00:28:38.163 03:13:08 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:38.163 03:13:08 keyring_file -- 
common/autotest_common.sh@10 -- # set +x 00:28:38.163 ************************************ 00:28:38.163 END TEST keyring_file 00:28:38.163 ************************************ 00:28:38.163 03:13:08 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:28:38.163 03:13:08 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:28:38.163 03:13:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:38.163 03:13:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:38.163 03:13:08 -- common/autotest_common.sh@10 -- # set +x 00:28:38.163 ************************************ 00:28:38.163 START TEST keyring_linux 00:28:38.163 ************************************ 00:28:38.163 03:13:08 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:28:38.163 Joined session keyring: 695659673 00:28:38.163 * Looking for test storage... 00:28:38.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:38.163 03:13:08 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:38.163 03:13:08 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:28:38.163 03:13:08 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:38.163 03:13:08 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@345 -- # : 1 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@368 -- # return 0 00:28:38.163 03:13:08 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:38.163 03:13:08 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:38.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.163 --rc genhtml_branch_coverage=1 00:28:38.163 --rc genhtml_function_coverage=1 00:28:38.163 --rc genhtml_legend=1 00:28:38.163 --rc geninfo_all_blocks=1 00:28:38.163 --rc geninfo_unexecuted_blocks=1 00:28:38.163 00:28:38.163 ' 00:28:38.163 03:13:08 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:38.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.163 --rc genhtml_branch_coverage=1 00:28:38.163 --rc genhtml_function_coverage=1 00:28:38.163 --rc genhtml_legend=1 00:28:38.163 --rc geninfo_all_blocks=1 00:28:38.163 --rc geninfo_unexecuted_blocks=1 00:28:38.163 00:28:38.163 ' 00:28:38.163 03:13:08 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:38.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.163 --rc genhtml_branch_coverage=1 00:28:38.163 --rc genhtml_function_coverage=1 00:28:38.163 --rc genhtml_legend=1 00:28:38.163 --rc geninfo_all_blocks=1 00:28:38.163 --rc geninfo_unexecuted_blocks=1 00:28:38.163 00:28:38.163 ' 00:28:38.163 03:13:08 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:38.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.163 --rc genhtml_branch_coverage=1 00:28:38.163 --rc genhtml_function_coverage=1 00:28:38.163 --rc genhtml_legend=1 00:28:38.163 --rc geninfo_all_blocks=1 00:28:38.163 --rc geninfo_unexecuted_blocks=1 00:28:38.163 00:28:38.163 ' 00:28:38.163 03:13:08 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:38.163 03:13:08 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:38.163 03:13:08 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:28:38.163 03:13:08 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:38.163 03:13:08 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:38.163 03:13:08 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:38.163 03:13:08 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:38.163 03:13:08 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:38.163 03:13:08 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:38.163 03:13:08 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:38.163 03:13:08 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:38.163 03:13:08 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:38.163 03:13:08 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:38.163 03:13:08 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:28:38.163 03:13:08 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=df5c4e32-2325-45d3-96aa-3fdfe3165f53 00:28:38.163 03:13:08 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:38.163 03:13:08 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:38.163 03:13:08 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:38.163 03:13:08 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:38.163 03:13:08 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.163 03:13:08 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.164 03:13:08 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.164 03:13:08 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.164 03:13:08 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.164 03:13:08 keyring_linux -- paths/export.sh@5 -- # export PATH 00:28:38.164 03:13:08 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:38.164 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:38.164 03:13:08 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:38.164 03:13:08 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:38.164 03:13:08 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:38.164 03:13:08 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:28:38.164 03:13:08 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:28:38.164 03:13:08 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:28:38.164 03:13:08 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:28:38.164 03:13:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:38.164 03:13:08 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:28:38.164 03:13:08 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:38.164 03:13:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:38.164 03:13:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:28:38.164 03:13:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:28:38.164 03:13:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:28:38.164 /tmp/:spdk-test:key0 00:28:38.164 03:13:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:28:38.164 03:13:08 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:28:38.164 03:13:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:38.164 03:13:08 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:28:38.164 03:13:08 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:38.164 03:13:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:38.164 03:13:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:28:38.164 03:13:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:28:38.164 03:13:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:28:38.164 03:13:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:28:38.164 03:13:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:28:38.164 /tmp/:spdk-test:key1 00:28:38.164 03:13:08 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=92093 00:28:38.164 03:13:08 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:38.164 03:13:08 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 92093 00:28:38.164 03:13:08 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 92093 ']' 00:28:38.164 03:13:08 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.164 03:13:08 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:38.164 03:13:08 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.164 03:13:08 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:38.164 03:13:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:38.423 [2024-12-05 03:13:09.092882] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
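The two /tmp/:spdk-test:key* files prepared above carry the PSK in the NVMe TLS interchange format ("NVMeTLSkey-1:0<digest>:<base64>:"). A minimal sketch of that wrapping step, assuming the payload is simply base64 of the PSK bytes followed by a 4-byte little-endian CRC32 of them; the helper name and the standalone python3 call are illustrative, not the upstream format_interchange_psk code:

# Sketch only: wrap a configured PSK the way the prep_key steps above do.
format_interchange_psk_sketch() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PY'
import base64, struct, sys, zlib
key, digest = sys.argv[1].encode(), sys.argv[2]
blob = base64.b64encode(key + struct.pack('<I', zlib.crc32(key))).decode()
print(f"NVMeTLSkey-1:0{digest}:{blob}:")
PY
}
# format_interchange_psk_sketch 00112233445566778899aabbccddeeff 0
#   should line up with the NVMeTLSkey-1:00:... value handed to keyctl below.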
00:28:38.423 [2024-12-05 03:13:09.093247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92093 ] 00:28:38.682 [2024-12-05 03:13:09.274232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.682 [2024-12-05 03:13:09.356815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.941 [2024-12-05 03:13:09.534432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:39.509 03:13:10 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:39.509 03:13:10 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:28:39.509 03:13:10 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:28:39.509 03:13:10 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.509 03:13:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:39.509 [2024-12-05 03:13:10.049290] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.509 null0 00:28:39.509 [2024-12-05 03:13:10.081309] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:39.509 [2024-12-05 03:13:10.081532] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:39.509 03:13:10 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.509 03:13:10 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:28:39.509 521036393 00:28:39.509 03:13:10 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:28:39.509 846052390 00:28:39.509 03:13:10 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=92111 00:28:39.509 03:13:10 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:28:39.509 03:13:10 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 92111 /var/tmp/bperf.sock 00:28:39.509 03:13:10 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 92111 ']' 00:28:39.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:39.509 03:13:10 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:39.509 03:13:10 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.510 03:13:10 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:39.510 03:13:10 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.510 03:13:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:39.510 [2024-12-05 03:13:10.217087] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
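Unlike the earlier keyring_file flow, these PSKs never reach SPDK as file paths: they sit in the kernel session keyring and SPDK is only told the key names. A hedged sketch of that hand-off, assuming the value passed to keyctl is read back from the file prep_key just wrote (names, socket, and serial mirror the trace above; the secret itself is elided):

# Sketch: park the interchange-formatted PSK on the session keyring and keep
# the serial keyctl returns; SPDK later resolves the key by name only.
psk0=$(< /tmp/:spdk-test:key0)                     # NVMeTLSkey-1:00:... (elided)
sn0=$(keyctl add user :spdk-test:key0 "$psk0" @s)  # 521036393 in this run
keyctl print "$sn0" > /dev/null                    # sanity: payload round-trips
# The initiator side then only references the name, never the secret:
#   rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
#   rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller ... --psk :spdk-test:key0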
00:28:39.510 [2024-12-05 03:13:10.217522] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92111 ] 00:28:39.768 [2024-12-05 03:13:10.401443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.768 [2024-12-05 03:13:10.517953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.336 03:13:11 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.336 03:13:11 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:28:40.336 03:13:11 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:28:40.337 03:13:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:28:40.595 03:13:11 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:28:40.595 03:13:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:41.163 [2024-12-05 03:13:11.702534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:41.163 03:13:11 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:41.163 03:13:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:41.422 [2024-12-05 03:13:12.075762] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:41.422 nvme0n1 00:28:41.422 03:13:12 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:28:41.422 03:13:12 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:28:41.422 03:13:12 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:41.422 03:13:12 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:41.422 03:13:12 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:41.422 03:13:12 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:41.681 03:13:12 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:28:41.681 03:13:12 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:41.681 03:13:12 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:28:41.681 03:13:12 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:28:41.681 03:13:12 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:28:41.681 03:13:12 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:41.681 03:13:12 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:41.940 03:13:12 keyring_linux -- keyring/linux.sh@25 -- # sn=521036393 00:28:41.940 03:13:12 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:28:41.940 03:13:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
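check_keys ties the two views together: what bdevperf reports over the RPC socket for :spdk-test:key0 has to agree with what the kernel keyring itself holds. A compact sketch of that cross-check (the helper name is illustrative; the RPC, jq, and keyctl calls are the ones traced here):

# Sketch: SPDK's serial for the key must match `keyctl search`, and the stored
# payload must still be readable via `keyctl print`.
check_key_sketch() {
    local name=$1 rpc_sn kernel_sn
    rpc_sn=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        keyring_get_keys | jq -r ".[] | select(.name == \"$name\") | .sn")
    kernel_sn=$(keyctl search @s user "$name")
    [[ "$rpc_sn" == "$kernel_sn" ]] || return 1
    keyctl print "$kernel_sn" > /dev/null
}
# check_key_sketch :spdk-test:key0   # both sides report 521036393 in this run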
00:28:41.940 03:13:12 keyring_linux -- keyring/linux.sh@26 -- # [[ 521036393 == \5\2\1\0\3\6\3\9\3 ]] 00:28:41.940 03:13:12 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 521036393 00:28:41.940 03:13:12 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:28:41.940 03:13:12 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:42.199 Running I/O for 1 seconds... 00:28:43.135 10204.00 IOPS, 39.86 MiB/s 00:28:43.135 Latency(us) 00:28:43.135 [2024-12-05T03:13:13.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.135 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:43.135 nvme0n1 : 1.01 10228.26 39.95 0.00 0.00 12443.66 3932.16 16562.73 00:28:43.135 [2024-12-05T03:13:13.980Z] =================================================================================================================== 00:28:43.136 [2024-12-05T03:13:13.980Z] Total : 10228.26 39.95 0.00 0.00 12443.66 3932.16 16562.73 00:28:43.136 { 00:28:43.136 "results": [ 00:28:43.136 { 00:28:43.136 "job": "nvme0n1", 00:28:43.136 "core_mask": "0x2", 00:28:43.136 "workload": "randread", 00:28:43.136 "status": "finished", 00:28:43.136 "queue_depth": 128, 00:28:43.136 "io_size": 4096, 00:28:43.136 "runtime": 1.01024, 00:28:43.136 "iops": 10228.262591067469, 00:28:43.136 "mibps": 39.9541507463573, 00:28:43.136 "io_failed": 0, 00:28:43.136 "io_timeout": 0, 00:28:43.136 "avg_latency_us": 12443.66263075935, 00:28:43.136 "min_latency_us": 3932.16, 00:28:43.136 "max_latency_us": 16562.734545454547 00:28:43.136 } 00:28:43.136 ], 00:28:43.136 "core_count": 1 00:28:43.136 } 00:28:43.136 03:13:13 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:43.136 03:13:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:43.394 03:13:14 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:28:43.394 03:13:14 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:28:43.394 03:13:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:43.394 03:13:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:43.394 03:13:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:43.394 03:13:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:43.653 03:13:14 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:28:43.653 03:13:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:43.653 03:13:14 keyring_linux -- keyring/linux.sh@23 -- # return 00:28:43.653 03:13:14 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:43.653 03:13:14 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:28:43.653 03:13:14 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:43.653 03:13:14 
keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:43.653 03:13:14 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:43.653 03:13:14 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:43.653 03:13:14 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:43.653 03:13:14 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:43.653 03:13:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:43.912 [2024-12-05 03:13:14.578455] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:43.912 [2024-12-05 03:13:14.579141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:28:43.912 [2024-12-05 03:13:14.580123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:28:43.912 [2024-12-05 03:13:14.581114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:28:43.912 [2024-12-05 03:13:14.581173] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:43.912 [2024-12-05 03:13:14.581221] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:28:43.912 [2024-12-05 03:13:14.581236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:28:43.912 request: 00:28:43.912 { 00:28:43.912 "name": "nvme0", 00:28:43.912 "trtype": "tcp", 00:28:43.912 "traddr": "127.0.0.1", 00:28:43.912 "adrfam": "ipv4", 00:28:43.912 "trsvcid": "4420", 00:28:43.912 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:43.912 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:43.912 "prchk_reftag": false, 00:28:43.912 "prchk_guard": false, 00:28:43.912 "hdgst": false, 00:28:43.912 "ddgst": false, 00:28:43.913 "psk": ":spdk-test:key1", 00:28:43.913 "allow_unrecognized_csi": false, 00:28:43.913 "method": "bdev_nvme_attach_controller", 00:28:43.913 "req_id": 1 00:28:43.913 } 00:28:43.913 Got JSON-RPC error response 00:28:43.913 response: 00:28:43.913 { 00:28:43.913 "code": -5, 00:28:43.913 "message": "Input/output error" 00:28:43.913 } 00:28:43.913 03:13:14 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:28:43.913 03:13:14 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:43.913 03:13:14 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:43.913 03:13:14 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:43.913 03:13:14 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:28:43.913 03:13:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:43.913 03:13:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:28:43.913 03:13:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:28:43.913 03:13:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:28:43.913 03:13:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:28:43.913 03:13:14 keyring_linux -- keyring/linux.sh@33 -- # sn=521036393 00:28:43.913 03:13:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 521036393 00:28:43.913 1 links removed 00:28:43.913 03:13:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:43.913 03:13:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:28:43.913 03:13:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:28:43.913 03:13:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:28:43.913 03:13:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:28:43.913 03:13:14 keyring_linux -- keyring/linux.sh@33 -- # sn=846052390 00:28:43.913 03:13:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 846052390 00:28:43.913 1 links removed 00:28:43.913 03:13:14 keyring_linux -- keyring/linux.sh@41 -- # killprocess 92111 00:28:43.913 03:13:14 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 92111 ']' 00:28:43.913 03:13:14 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 92111 00:28:43.913 03:13:14 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:28:43.913 03:13:14 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:43.913 03:13:14 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92111 00:28:43.913 killing process with pid 92111 00:28:43.913 Received shutdown signal, test time was about 1.000000 seconds 00:28:43.913 00:28:43.913 Latency(us) 00:28:43.913 [2024-12-05T03:13:14.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.913 [2024-12-05T03:13:14.757Z] =================================================================================================================== 00:28:43.913 [2024-12-05T03:13:14.757Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:43.913 03:13:14 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:43.913 03:13:14 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:43.913 03:13:14 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92111' 00:28:43.913 03:13:14 keyring_linux -- common/autotest_common.sh@973 -- # kill 92111 00:28:43.913 03:13:14 keyring_linux -- common/autotest_common.sh@978 -- # wait 92111 00:28:44.847 03:13:15 keyring_linux -- keyring/linux.sh@42 -- # killprocess 92093 00:28:44.847 03:13:15 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 92093 ']' 00:28:44.847 03:13:15 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 92093 00:28:44.847 03:13:15 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:28:44.847 03:13:15 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:44.847 03:13:15 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92093 00:28:44.847 killing process with pid 92093 00:28:44.847 03:13:15 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:44.847 03:13:15 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:44.847 03:13:15 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92093' 00:28:44.847 03:13:15 keyring_linux -- common/autotest_common.sh@973 -- # kill 92093 00:28:44.847 03:13:15 keyring_linux -- common/autotest_common.sh@978 -- # wait 92093 00:28:46.748 ************************************ 00:28:46.748 END TEST keyring_linux 00:28:46.748 ************************************ 00:28:46.748 00:28:46.748 real 0m8.531s 00:28:46.748 user 0m15.394s 00:28:46.748 sys 0m1.462s 00:28:46.748 03:13:17 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:46.748 03:13:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:46.748 03:13:17 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:28:46.748 03:13:17 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:28:46.748 03:13:17 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:28:46.748 03:13:17 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:28:46.748 03:13:17 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:28:46.748 03:13:17 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:28:46.748 03:13:17 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:28:46.748 03:13:17 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:28:46.748 03:13:17 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:28:46.748 03:13:17 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:28:46.748 03:13:17 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:28:46.748 03:13:17 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:28:46.748 03:13:17 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:28:46.748 03:13:17 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:28:46.748 03:13:17 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:28:46.748 03:13:17 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:28:46.748 03:13:17 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:28:46.748 03:13:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:46.748 03:13:17 -- common/autotest_common.sh@10 -- # set +x 00:28:46.748 03:13:17 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:28:46.748 03:13:17 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:28:46.748 03:13:17 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:28:46.748 03:13:17 -- common/autotest_common.sh@10 -- # set +x 00:28:48.652 INFO: APP EXITING 00:28:48.652 INFO: killing all VMs 
00:28:48.652 INFO: killing vhost app 00:28:48.652 INFO: EXIT DONE 00:28:48.910 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:49.168 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:49.168 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:49.735 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:49.735 Cleaning 00:28:49.735 Removing: /var/run/dpdk/spdk0/config 00:28:49.735 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:49.735 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:49.735 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:49.735 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:49.735 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:49.735 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:49.735 Removing: /var/run/dpdk/spdk1/config 00:28:49.735 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:49.735 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:49.735 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:49.735 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:49.735 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:49.735 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:49.735 Removing: /var/run/dpdk/spdk2/config 00:28:49.735 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:49.735 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:49.735 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:49.735 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:49.735 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:49.735 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:49.993 Removing: /var/run/dpdk/spdk3/config 00:28:49.993 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:49.993 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:49.993 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:49.994 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:49.994 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:49.994 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:49.994 Removing: /var/run/dpdk/spdk4/config 00:28:49.994 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:49.994 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:49.994 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:49.994 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:49.994 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:49.994 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:49.994 Removing: /dev/shm/nvmf_trace.0 00:28:49.994 Removing: /dev/shm/spdk_tgt_trace.pid57438 00:28:49.994 Removing: /var/run/dpdk/spdk0 00:28:49.994 Removing: /var/run/dpdk/spdk1 00:28:49.994 Removing: /var/run/dpdk/spdk2 00:28:49.994 Removing: /var/run/dpdk/spdk3 00:28:49.994 Removing: /var/run/dpdk/spdk4 00:28:49.994 Removing: /var/run/dpdk/spdk_pid57219 00:28:49.994 Removing: /var/run/dpdk/spdk_pid57438 00:28:49.994 Removing: /var/run/dpdk/spdk_pid57661 00:28:49.994 Removing: /var/run/dpdk/spdk_pid57760 00:28:49.994 Removing: /var/run/dpdk/spdk_pid57805 00:28:49.994 Removing: /var/run/dpdk/spdk_pid57933 00:28:49.994 Removing: /var/run/dpdk/spdk_pid57951 00:28:49.994 Removing: /var/run/dpdk/spdk_pid58110 00:28:49.994 Removing: /var/run/dpdk/spdk_pid58318 00:28:49.994 Removing: /var/run/dpdk/spdk_pid58479 00:28:49.994 Removing: /var/run/dpdk/spdk_pid58576 00:28:49.994 
Removing: /var/run/dpdk/spdk_pid58679 00:28:49.994 Removing: /var/run/dpdk/spdk_pid58790 00:28:49.994 Removing: /var/run/dpdk/spdk_pid58887 00:28:49.994 Removing: /var/run/dpdk/spdk_pid58932 00:28:49.994 Removing: /var/run/dpdk/spdk_pid58963 00:28:49.994 Removing: /var/run/dpdk/spdk_pid59039 00:28:49.994 Removing: /var/run/dpdk/spdk_pid59144 00:28:49.994 Removing: /var/run/dpdk/spdk_pid59603 00:28:49.994 Removing: /var/run/dpdk/spdk_pid59673 00:28:49.994 Removing: /var/run/dpdk/spdk_pid59736 00:28:49.994 Removing: /var/run/dpdk/spdk_pid59752 00:28:49.994 Removing: /var/run/dpdk/spdk_pid59878 00:28:49.994 Removing: /var/run/dpdk/spdk_pid59894 00:28:49.994 Removing: /var/run/dpdk/spdk_pid60016 00:28:49.994 Removing: /var/run/dpdk/spdk_pid60032 00:28:49.994 Removing: /var/run/dpdk/spdk_pid60096 00:28:49.994 Removing: /var/run/dpdk/spdk_pid60118 00:28:49.994 Removing: /var/run/dpdk/spdk_pid60172 00:28:49.994 Removing: /var/run/dpdk/spdk_pid60190 00:28:49.994 Removing: /var/run/dpdk/spdk_pid60372 00:28:49.994 Removing: /var/run/dpdk/spdk_pid60410 00:28:49.994 Removing: /var/run/dpdk/spdk_pid60493 00:28:49.994 Removing: /var/run/dpdk/spdk_pid60845 00:28:49.994 Removing: /var/run/dpdk/spdk_pid60863 00:28:49.994 Removing: /var/run/dpdk/spdk_pid60912 00:28:49.994 Removing: /var/run/dpdk/spdk_pid60932 00:28:49.994 Removing: /var/run/dpdk/spdk_pid60965 00:28:49.994 Removing: /var/run/dpdk/spdk_pid60996 00:28:49.994 Removing: /var/run/dpdk/spdk_pid61016 00:28:49.994 Removing: /var/run/dpdk/spdk_pid61049 00:28:49.994 Removing: /var/run/dpdk/spdk_pid61080 00:28:49.994 Removing: /var/run/dpdk/spdk_pid61100 00:28:49.994 Removing: /var/run/dpdk/spdk_pid61136 00:28:49.994 Removing: /var/run/dpdk/spdk_pid61169 00:28:49.994 Removing: /var/run/dpdk/spdk_pid61200 00:28:49.994 Removing: /var/run/dpdk/spdk_pid61228 00:28:49.994 Removing: /var/run/dpdk/spdk_pid61259 00:28:49.994 Removing: /var/run/dpdk/spdk_pid61284 00:28:49.994 Removing: /var/run/dpdk/spdk_pid61312 00:28:49.994 Removing: /var/run/dpdk/spdk_pid61343 00:28:49.994 Removing: /var/run/dpdk/spdk_pid61363 00:28:49.994 Removing: /var/run/dpdk/spdk_pid61396 00:28:49.994 Removing: /var/run/dpdk/spdk_pid61433 00:28:49.994 Removing: /var/run/dpdk/spdk_pid61459 00:28:49.994 Removing: /var/run/dpdk/spdk_pid61500 00:28:50.253 Removing: /var/run/dpdk/spdk_pid61584 00:28:50.253 Removing: /var/run/dpdk/spdk_pid61619 00:28:50.253 Removing: /var/run/dpdk/spdk_pid61646 00:28:50.253 Removing: /var/run/dpdk/spdk_pid61681 00:28:50.253 Removing: /var/run/dpdk/spdk_pid61702 00:28:50.253 Removing: /var/run/dpdk/spdk_pid61722 00:28:50.253 Removing: /var/run/dpdk/spdk_pid61771 00:28:50.253 Removing: /var/run/dpdk/spdk_pid61802 00:28:50.253 Removing: /var/run/dpdk/spdk_pid61837 00:28:50.253 Removing: /var/run/dpdk/spdk_pid61859 00:28:50.253 Removing: /var/run/dpdk/spdk_pid61880 00:28:50.253 Removing: /var/run/dpdk/spdk_pid61902 00:28:50.253 Removing: /var/run/dpdk/spdk_pid61923 00:28:50.253 Removing: /var/run/dpdk/spdk_pid61945 00:28:50.253 Removing: /var/run/dpdk/spdk_pid61966 00:28:50.253 Removing: /var/run/dpdk/spdk_pid61988 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62027 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62067 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62083 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62129 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62145 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62170 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62217 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62246 00:28:50.253 Removing: 
/var/run/dpdk/spdk_pid62279 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62299 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62318 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62338 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62357 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62376 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62391 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62410 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62503 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62586 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62745 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62789 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62846 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62873 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62907 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62928 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62977 00:28:50.253 Removing: /var/run/dpdk/spdk_pid62999 00:28:50.253 Removing: /var/run/dpdk/spdk_pid63089 00:28:50.253 Removing: /var/run/dpdk/spdk_pid63128 00:28:50.253 Removing: /var/run/dpdk/spdk_pid63201 00:28:50.253 Removing: /var/run/dpdk/spdk_pid63325 00:28:50.253 Removing: /var/run/dpdk/spdk_pid63413 00:28:50.253 Removing: /var/run/dpdk/spdk_pid63464 00:28:50.253 Removing: /var/run/dpdk/spdk_pid63582 00:28:50.253 Removing: /var/run/dpdk/spdk_pid63642 00:28:50.253 Removing: /var/run/dpdk/spdk_pid63687 00:28:50.253 Removing: /var/run/dpdk/spdk_pid63942 00:28:50.253 Removing: /var/run/dpdk/spdk_pid64055 00:28:50.253 Removing: /var/run/dpdk/spdk_pid64094 00:28:50.253 Removing: /var/run/dpdk/spdk_pid64126 00:28:50.253 Removing: /var/run/dpdk/spdk_pid64177 00:28:50.253 Removing: /var/run/dpdk/spdk_pid64217 00:28:50.253 Removing: /var/run/dpdk/spdk_pid64268 00:28:50.253 Removing: /var/run/dpdk/spdk_pid64306 00:28:50.253 Removing: /var/run/dpdk/spdk_pid64715 00:28:50.253 Removing: /var/run/dpdk/spdk_pid64760 00:28:50.253 Removing: /var/run/dpdk/spdk_pid65128 00:28:50.253 Removing: /var/run/dpdk/spdk_pid65619 00:28:50.253 Removing: /var/run/dpdk/spdk_pid65902 00:28:50.253 Removing: /var/run/dpdk/spdk_pid66847 00:28:50.253 Removing: /var/run/dpdk/spdk_pid67803 00:28:50.253 Removing: /var/run/dpdk/spdk_pid67932 00:28:50.253 Removing: /var/run/dpdk/spdk_pid68012 00:28:50.253 Removing: /var/run/dpdk/spdk_pid69475 00:28:50.253 Removing: /var/run/dpdk/spdk_pid69859 00:28:50.253 Removing: /var/run/dpdk/spdk_pid73589 00:28:50.253 Removing: /var/run/dpdk/spdk_pid74008 00:28:50.253 Removing: /var/run/dpdk/spdk_pid74122 00:28:50.253 Removing: /var/run/dpdk/spdk_pid74271 00:28:50.253 Removing: /var/run/dpdk/spdk_pid74311 00:28:50.253 Removing: /var/run/dpdk/spdk_pid74352 00:28:50.253 Removing: /var/run/dpdk/spdk_pid74387 00:28:50.253 Removing: /var/run/dpdk/spdk_pid74511 00:28:50.253 Removing: /var/run/dpdk/spdk_pid74660 00:28:50.253 Removing: /var/run/dpdk/spdk_pid74851 00:28:50.253 Removing: /var/run/dpdk/spdk_pid74947 00:28:50.253 Removing: /var/run/dpdk/spdk_pid75160 00:28:50.253 Removing: /var/run/dpdk/spdk_pid75262 00:28:50.253 Removing: /var/run/dpdk/spdk_pid75368 00:28:50.512 Removing: /var/run/dpdk/spdk_pid75746 00:28:50.512 Removing: /var/run/dpdk/spdk_pid76189 00:28:50.512 Removing: /var/run/dpdk/spdk_pid76190 00:28:50.512 Removing: /var/run/dpdk/spdk_pid76191 00:28:50.512 Removing: /var/run/dpdk/spdk_pid76473 00:28:50.512 Removing: /var/run/dpdk/spdk_pid76759 00:28:50.512 Removing: /var/run/dpdk/spdk_pid76767 00:28:50.512 Removing: /var/run/dpdk/spdk_pid79130 00:28:50.512 Removing: /var/run/dpdk/spdk_pid79550 00:28:50.512 Removing: /var/run/dpdk/spdk_pid79553 
00:28:50.512 Removing: /var/run/dpdk/spdk_pid79899 00:28:50.512 Removing: /var/run/dpdk/spdk_pid79917 00:28:50.512 Removing: /var/run/dpdk/spdk_pid79932 00:28:50.512 Removing: /var/run/dpdk/spdk_pid79966 00:28:50.512 Removing: /var/run/dpdk/spdk_pid79982 00:28:50.512 Removing: /var/run/dpdk/spdk_pid80062 00:28:50.512 Removing: /var/run/dpdk/spdk_pid80076 00:28:50.512 Removing: /var/run/dpdk/spdk_pid80180 00:28:50.512 Removing: /var/run/dpdk/spdk_pid80184 00:28:50.512 Removing: /var/run/dpdk/spdk_pid80292 00:28:50.512 Removing: /var/run/dpdk/spdk_pid80301 00:28:50.512 Removing: /var/run/dpdk/spdk_pid80748 00:28:50.513 Removing: /var/run/dpdk/spdk_pid80784 00:28:50.513 Removing: /var/run/dpdk/spdk_pid80895 00:28:50.513 Removing: /var/run/dpdk/spdk_pid80968 00:28:50.513 Removing: /var/run/dpdk/spdk_pid81346 00:28:50.513 Removing: /var/run/dpdk/spdk_pid81555 00:28:50.513 Removing: /var/run/dpdk/spdk_pid81994 00:28:50.513 Removing: /var/run/dpdk/spdk_pid82564 00:28:50.513 Removing: /var/run/dpdk/spdk_pid83433 00:28:50.513 Removing: /var/run/dpdk/spdk_pid84094 00:28:50.513 Removing: /var/run/dpdk/spdk_pid84103 00:28:50.513 Removing: /var/run/dpdk/spdk_pid86124 00:28:50.513 Removing: /var/run/dpdk/spdk_pid86191 00:28:50.513 Removing: /var/run/dpdk/spdk_pid86260 00:28:50.513 Removing: /var/run/dpdk/spdk_pid86330 00:28:50.513 Removing: /var/run/dpdk/spdk_pid86462 00:28:50.513 Removing: /var/run/dpdk/spdk_pid86528 00:28:50.513 Removing: /var/run/dpdk/spdk_pid86595 00:28:50.513 Removing: /var/run/dpdk/spdk_pid86664 00:28:50.513 Removing: /var/run/dpdk/spdk_pid87054 00:28:50.513 Removing: /var/run/dpdk/spdk_pid88279 00:28:50.513 Removing: /var/run/dpdk/spdk_pid88428 00:28:50.513 Removing: /var/run/dpdk/spdk_pid88676 00:28:50.513 Removing: /var/run/dpdk/spdk_pid89294 00:28:50.513 Removing: /var/run/dpdk/spdk_pid89454 00:28:50.513 Removing: /var/run/dpdk/spdk_pid89614 00:28:50.513 Removing: /var/run/dpdk/spdk_pid89711 00:28:50.513 Removing: /var/run/dpdk/spdk_pid89873 00:28:50.513 Removing: /var/run/dpdk/spdk_pid89986 00:28:50.513 Removing: /var/run/dpdk/spdk_pid90719 00:28:50.513 Removing: /var/run/dpdk/spdk_pid90750 00:28:50.513 Removing: /var/run/dpdk/spdk_pid90792 00:28:50.513 Removing: /var/run/dpdk/spdk_pid91144 00:28:50.513 Removing: /var/run/dpdk/spdk_pid91179 00:28:50.513 Removing: /var/run/dpdk/spdk_pid91212 00:28:50.513 Removing: /var/run/dpdk/spdk_pid91677 00:28:50.513 Removing: /var/run/dpdk/spdk_pid91694 00:28:50.513 Removing: /var/run/dpdk/spdk_pid91951 00:28:50.513 Removing: /var/run/dpdk/spdk_pid92093 00:28:50.513 Removing: /var/run/dpdk/spdk_pid92111 00:28:50.513 Clean 00:28:50.513 03:13:21 -- common/autotest_common.sh@1453 -- # return 0 00:28:50.513 03:13:21 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:28:50.513 03:13:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.513 03:13:21 -- common/autotest_common.sh@10 -- # set +x 00:28:50.773 03:13:21 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:28:50.773 03:13:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.773 03:13:21 -- common/autotest_common.sh@10 -- # set +x 00:28:50.773 03:13:21 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:50.773 03:13:21 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:50.773 03:13:21 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:50.773 03:13:21 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:28:50.773 03:13:21 -- spdk/autotest.sh@398 
-- # hostname 00:28:50.773 03:13:21 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:51.048 geninfo: WARNING: invalid characters removed from testname! 00:29:17.614 03:13:44 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:17.614 03:13:47 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:20.148 03:13:50 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:22.684 03:13:52 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:25.215 03:13:55 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:27.747 03:13:58 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:30.283 03:14:00 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:30.283 03:14:00 -- spdk/autorun.sh@1 -- $ timing_finish 00:29:30.283 03:14:00 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:29:30.283 03:14:00 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:30.283 03:14:00 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:29:30.283 03:14:00 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: 
--countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:30.283 + [[ -n 5253 ]] 00:29:30.283 + sudo kill 5253 00:29:30.292 [Pipeline] } 00:29:30.310 [Pipeline] // timeout 00:29:30.318 [Pipeline] } 00:29:30.336 [Pipeline] // stage 00:29:30.343 [Pipeline] } 00:29:30.362 [Pipeline] // catchError 00:29:30.372 [Pipeline] stage 00:29:30.373 [Pipeline] { (Stop VM) 00:29:30.384 [Pipeline] sh 00:29:30.665 + vagrant halt 00:29:33.952 ==> default: Halting domain... 00:29:39.240 [Pipeline] sh 00:29:39.520 + vagrant destroy -f 00:29:42.056 ==> default: Removing domain... 00:29:42.327 [Pipeline] sh 00:29:42.610 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:29:42.620 [Pipeline] } 00:29:42.639 [Pipeline] // stage 00:29:42.645 [Pipeline] } 00:29:42.663 [Pipeline] // dir 00:29:42.669 [Pipeline] } 00:29:42.686 [Pipeline] // wrap 00:29:42.693 [Pipeline] } 00:29:42.707 [Pipeline] // catchError 00:29:42.718 [Pipeline] stage 00:29:42.721 [Pipeline] { (Epilogue) 00:29:42.736 [Pipeline] sh 00:29:43.071 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:48.389 [Pipeline] catchError 00:29:48.391 [Pipeline] { 00:29:48.406 [Pipeline] sh 00:29:48.688 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:48.768 Artifacts sizes are good 00:29:48.770 [Pipeline] } 00:29:48.785 [Pipeline] // catchError 00:29:48.797 [Pipeline] archiveArtifacts 00:29:48.804 Archiving artifacts 00:29:48.944 [Pipeline] cleanWs 00:29:48.956 [WS-CLEANUP] Deleting project workspace... 00:29:48.956 [WS-CLEANUP] Deferred wipeout is used... 00:29:48.962 [WS-CLEANUP] done 00:29:48.964 [Pipeline] } 00:29:48.980 [Pipeline] // stage 00:29:48.986 [Pipeline] } 00:29:49.000 [Pipeline] // node 00:29:49.006 [Pipeline] End of Pipeline 00:29:49.044 Finished: SUCCESS
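As an aside, the coverage post-processing traced in the spdk/autotest.sh tail above boils down to the following sequence; this is a condensed sketch that keeps only the flags visible in the log (the --rc branch/function-coverage switches are omitted for brevity), with OUT set to the output directory used in the run:

  OUT=/home/vagrant/spdk_repo/spdk/../output
  # merge the baseline capture and the test-time capture into one tracefile
  lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  # drop DPDK sources from the merged report
  lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
  # drop system headers and the example apps that are not of interest
  lcov -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
  lcov -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"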